This is the 12th day of my participation in the August More Text Challenge.
4. Explain performance analysis
4.1 Overview
The EXPLAIN keyword simulates how the optimizer executes a SQL query, letting you see how MySQL handles your SQL statement and analyze your query or table structure for performance bottlenecks. Syntax: EXPLAIN + SQL statement; after execution it returns several columns of information.
4.2 Data Preparation
create table course
(
    cid int(3),
    cname varchar(20),
    tid int(3)
);
create table teacher
(
    tid int(3),
    tname varchar(20),
    tcid int(3)
);
create table teacherCard
(
    tcid int(3),
    tcdesc varchar(200)
);
insert into course values(1,'java',1);
insert into course values(2,'html',1);
insert into course values(3,'sql',2);
insert into course values(4,'web',3);
insert into teacher values(1,'tz',1);
insert into teacher values(2,'tw',2);
insert into teacher values(3,'tl',3);
insert into teacherCard values(1,'tzdesc');
insert into teacherCard values(2,'twdesc');
insert into teacherCard values(3,'tldesc');
4.3, id
The ID column in the result set returned by Explain represents the sequence number of the SELECT query and the order in which the SELECT clause or operation table was executed in the query.
Each distinct id number indicates an independent query; the fewer queries a SQL statement needs, the better.
# Query the teachers of course 2, or the teacher whose tid is 3
EXPLAIN select t.* from teacher t
left join teacherCard tc
on tc.tcid = t.tcid
left join course c
on c.tid = t.tid
where c.cid = 2 or t.tid = 3
As you can see, the id values are the same, in order from the top down.
What if the ID is different?
# query the description of the teacher who teaches SQL (desc)
# Without a subquery all the id values would be the same, so we write it in subquery form below
EXPLAIN select tc.tcdesc from teacherCard tc where tc.tcid =
    (select t.tcid from teacher t where t.tid =
        (select c.tid from course c where c.cname = 'sql'));
For subqueries, the id number increases. The larger the ID value, the higher the priority, and the earlier the query is executed. When subqueries are nested, the inner layer is checked before the outer layer. So first query the C table, then the T table, and finally the TC table.
Going further, what if some ids are the same and others are different?
# query the description of the teacher who teaches SQL (desc)
# We use subquery 1 plus multiple tables for query
explain select t.tname, tc.tcdesc from teacher t, teacherCard tc where t.tcid = tc.tcid
and t.tid = (select c.tid from course c where cname = 'sql');
The larger the ID value, the higher the priority. If the id value is the same, the execution starts from the top down.
4.4, select_type
select_type indicates the query type; it is used to distinguish ordinary queries, union queries, and subqueries.
attribute | meaning |
---|---|
SIMPLE | A simple select query with no subqueries or UNION |
PRIMARY | If the query contains any complex subparts, the outermost query is marked as PRIMARY |
DERIVED | A temporary (derived) table is used |
SUBQUERY | The SQL contains a subquery (not the outermost select) |
DEPENDENT SUBQUERY | A subquery in the SELECT or WHERE list that depends on the outer query |
UNCACHEABLE SUBQUERY | A subquery whose result cannot be cached |
UNION | In table1 union table2 inside a FROM subquery, table1 is DERIVED and table2 is UNION |
UNION RESULT | Tells the developer which tables' results a union query merges |
4.4.1, SIMPLE
A simple select query with no subqueries or UNION join queries.
select * from teacher
4.4.2, PRIMARY
If the query contains any complex subparts, the outermost query is marked as Primary.
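As a sketch against the tables created in section 4.2, a query containing a subquery should mark the outer select PRIMARY and the inner one SUBQUERY:

```sql
-- The outer select is marked PRIMARY, the inner one SUBQUERY
EXPLAIN select tname from teacher
where tid = (select tid from course where cname = 'sql');
```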
4.4.3, DERIVED
When temporary tables are used, the query is marked DERIVED. This happens in two cases:
- There is only one table in the FROM subquery.
explain select cr.cname from ( select * from course where tid in (1,2) ) cr ;
- In the from subquery, if there is table1 union table2, then Table1 is derived and Table2 is union
explain select cr.cname from ( select * from course where tid = 1 union select * from course where tid = 2 ) cr ;
4.4.4, UNION
If a second SELECT appears after a UNION, it is labeled UNION; the first SELECT in the FROM subquery is labeled DERIVED.
explain select cr.cname from ( select * from course where tid = 1 union select * from course where tid = 2 ) cr ;
4.4.5, UNION RESULT
Tells the developer which tables' results a union query merges.
explain select cr.cname from ( select * from course where tid = 1 union select * from course where tid = 2 ) cr ;
4.5, table
Table indicates which table the data is based on.
# give a more complex SQL
explain select cr.cname from ( select * from course where tid = 1 union select * from course where tid = 2 ) cr ;
In the table column, a name like &lt;derived2&gt; means the row refers to the derived table produced by the query whose id is 2; similarly, a union result is shown with a name like &lt;union2,3&gt;.
4.6, type
type shows the access type used for the table. It is an important indicator; performance from best to worst is:
system > const > eq_ref > ref > fulltext > ref_or_null > index_merge > unique_subquery > index_subquery > range > index >ALL
In general, you want to make sure that the query is at least range, preferably ref, where system and const are ideal, but usually hard to reach, usually ref or range.
If you want to optimize type, you must have an index.
4.6.1, system
system: the table is a system table with only one row of data, or the primary query of a derived table has only one row. Generally unreachable, so it can be ignored.
4.6.2, const
const: the data can be found in a single lookup, via a primary key or unique index compared with a constant. It is also very difficult to achieve.
# create the table
create table test01
(
    tid int(3),
    tname varchar(20)
);
insert into test01 values(1,'xiaolin');
commit;
# add a primary key index
alter table test01 add constraint tid_pk primary key(tid);
# test
explain select * from (select * from test01) t where tid = 1;
4.6.3, eq_ref
A unique index scan: for each index key, exactly one row in the table matches (one and only one; neither zero nor several). This is common with primary key or unique index joins. It is also not easy to achieve.
explain select t.tcid from teacher t,teacherCard tc where t.tcid = tc.tcid
4.6.4, ref
A non-unique index scan that returns all matched rows for each index key query.
# add an ordinary (non-unique) index on teacher's tname field
alter table teacher add index index_name(tname);
# query by tname
explain SELECT * from teacher where tname = 'tw';
4.6.5, range
Retrieves only rows in a given range, using an index to select the rows. A range scan is better than a full table scan because it only has to start at one point in the index and end at another, without scanning the entire index.
# Add a normal index to teacher's TID
alter table teacher add index index_id(tid);
# Query the teachers whose tid is less than 3
explain select * from teacher where tid < 3
4.6.6, index
A full index scan: it traverses the whole index tree, reading only the index and not the table data.
# We just indexed the tid
explain select tid from teacher;
4.6.7, all
A full table scan that reads every row of the table; it usually appears when the filter column has no index.
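As a sketch against the tables from section 4.2 (assuming only tid and tname were indexed above), filtering on the unindexed tcid column should produce type = ALL:

```sql
-- tcid has no index here, so MySQL falls back to a full table scan (type = ALL)
explain select * from teacher where tcid = 3;
```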
4.6.8, Summary
- system/const: the result is a single row of data.
- eq_ref: the result has multiple rows, but each index key matches exactly one row.
- ref: the result has multiple rows, and each index key may match zero or more rows.
4.7, possible_keys
Displays one or more indexes that may apply to this table. If an index exists on the field involved in the query, it will be listed, but not necessarily used by the query.
4.8, the key
The actual index used. If NULL, no index is used.
4.9, key_len
Represents the number of bytes used in the index. This column can be used to calculate the length of the index used in the query, and it helps you check whether the index is being fully utilized: the longer the key_len, the more of the index is used.
How to calculate key_len:
- Base lengths: int = 4 bytes, varchar(20) = 20, char(20) = 20.
- For string fields such as varchar or char, multiply by the character-set factor: utf8 ×3, GBK ×2.
- varchar is a dynamic string, so add 2 bytes for the length prefix.
- A field that allows NULL adds 1 byte.
- For a compound index, the length of key_len is the sum of the current index and all previous indexes.
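Putting the rules together, a worked example (assuming the table uses utf8 and the index_name(tname) index from section 4.6.4 on the nullable varchar(20) tname column): 20 characters × 3 bytes (utf8) + 2 bytes (varchar prefix) + 1 byte (nullable) = 63.

```sql
-- Assuming teacher uses utf8 and index_name(tname) exists:
-- tname is varchar(20), utf8, nullable -> key_len = 20*3 + 2 + 1 = 63
explain select * from teacher where tname = 'tz';
```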
4.10, ref
Shows which columns or constants are used to look up values on the index columns; when a constant is used, it shows const.
4.11, rows
The ROWS column shows the number of rows that MySQL believes it must check to execute the query. The less the better!
4.12, Extra
Other additional important information.
4.12.1, Using filesort
For example, with age and name fields, looking rows up by name but then sorting them by age forces an extra sort.
For a single-column index, Using filesort does not appear if you filter and sort on the same field, and does appear if they differ.
For composite indexes, do not skip columns: use the best left prefix, and keep WHERE and ORDER BY in the order of the composite index, without crossing or reordering columns.
# tid is indexed; we filter on tid but order by cid, so the sort column differs from the lookup column and Using filesort appears
EXPLAIN select * from course where tid=1 order by cid
4.12.2, Using temporary
MySQL uses a temporary table to hold intermediate results, which is a relatively large performance hit. It is common with order by and group by queries: you already have a table but cannot use it directly, and must store the intermediate results in an additional temporary table.
Avoid Using temporary by grouping by exactly the columns you query.
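A sketch that tends to trigger it (assuming cname is not indexed): grouping by a column other than the one filtered on forces MySQL to stage intermediate results.

```sql
-- Grouping by a different column than the filter may show Using temporary
explain select cname from course where tid in (1,2) group by cname;
```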
4.12.3, Using index
Using index means the select uses a covering index: all the columns used are contained in the index. It **avoids accessing the table's data rows, which improves performance!** Such a query never reads the original table; it obtains the data from the index file alone.
- If Using where appears together with Using index, the index is being used to look up index key values.
- If Using where does not appear alongside it, the index is used only to read data, not to perform lookups.
- A covering index affects possible_keys and key:
  - Without a WHERE clause, the index appears only in key.
  - With a WHERE clause, the index appears in both possible_keys and key.
4.12.4, Using where
Indicates that a WHERE filter is applied: the query both reads from the index and goes back to the original table.
# tid is indexed, but select * must go back to the table for the other columns, so Using where appears
explain SELECT * from course c where c.tid =1
4.12.5, Using join buffer
Indicates that connection caching is used.
explain SELECT * from course c,teacher t where t.tid = c.tid
5. Single table SQL optimization
Construct sentence:
CREATE TABLE `dept` (
    `id` INT(11) NOT NULL AUTO_INCREMENT,
    `deptName` VARCHAR(30) DEFAULT NULL,
    `address` VARCHAR(40) DEFAULT NULL,
    ceo INT NULL,
    PRIMARY KEY (`id`)
) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
CREATE TABLE `emp` (
    `id` INT(11) NOT NULL AUTO_INCREMENT,
    `empno` INT NOT NULL,
    `name` VARCHAR(20) DEFAULT NULL,
    `age` INT(3) DEFAULT NULL,
    `deptId` INT(11) DEFAULT NULL,
    PRIMARY KEY (`id`)
    #CONSTRAINT `fk_dept_id` FOREIGN KEY (`deptId`) REFERENCES `t_dept` (`id`)
) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
5.1. Full-value matching is my favorite
A full-value match, my favorite kind of index usage, is one in which every queried field is matched in the index, in order! We need to match the order of the composite index's fields and must not skip columns.
The order in which fields are queried in SQL has nothing to do with the order in which indexes are used. The optimizer will give you automatic optimizations without affecting the results of SQL execution.
Create the index:
create index index_age_depid_name on emp(age,deptid,name);
# Check the emp indexes to see whether we created the index successfully
show index from emp
Writing SQL tests
EXPLAIN SELECT * FROM emp WHERE emp.age=30;
EXPLAIN SELECT * FROM emp WHERE emp.age=30 and deptid=4;
EXPLAIN SELECT * FROM emp WHERE emp.age=30 and deptid=4 AND emp.name = 'abcd';
5.2. Best left prefix rule
5.2.1 Index failure
explain select * from emp where deptid = 4 and name = "xiaolin"
As we can see, the index fails here because the leading column, age, is skipped.
5.2.2 The index is valid
explain select * from emp where age = 18 and deptid = 4
5.2.3 Summary
If the order of the query's filter fields does not match the index fields, the index cannot be fully used and may even fail entirely! When using a compound index, follow the best left prefix rule: if the index covers multiple columns, the query must start from the leftmost column of the index and must not skip columns in the index.
To use the index, filter conditions must follow the order in which the index was created. Once a column is skipped, every index column after it becomes unusable.
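To illustrate skipping a middle column, a sketch against the index_age_depid_name index created above: age can still use the index, but name cannot, because deptid is skipped.

```sql
-- age (the leading column) is used; name is not, because deptid is skipped
explain select * from emp where age = 30 and name = 'abcd';
```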
5.3. Do not do any calculations on index columns
explain select * from emp where age +1 = 18 and deptid = 4
We can see the index fails here, because of the +1 operation on the indexed column age. We must not perform any operation on an index column (calculation, function, or automatic/manual type conversion), because it invalidates the index and degrades the query to a full table scan.
5.4. Use covering indexes whenever possible
Let's first look at the case where a covering index is not used.
explain SELECT SQL_NO_CACHE * FROM emp WHERE emp.age=30 and deptid=5 AND emp.name = 'xiaolin';
Now let's look at the case where a covering index is used.
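The covering-index query presumably selects only the indexed columns instead of *, mirroring the previous example:

```sql
-- Selecting only columns contained in index_age_depid_name allows Using index
explain SELECT age, deptid, name FROM emp WHERE emp.age=30 and deptid=5 AND emp.name = 'xiaolin';
```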
Using index appears, indicating improved performance. **Keep the queried columns consistent with the index columns, and do not write `select *`.**
5.5. Try to avoid prefix fuzzy matching
In daily use, fuzzy matching with LIKE is extremely common. We need to avoid prefix fuzzy matching, because it invalidates the index. In other words, LIKE should start with a constant, not with %.
# Add an index to the name field
create index index_name on emp(name);
# Test fuzzy matching with a constant prefix (wildcard at the end)
explain select * from emp where name like "a%";
# Test prefix fuzzy matching (wildcard at the start)
explain select * from emp where name like "%a";
# Test fuzzy matching on both sides
explain select * from emp where name like "%a%";
We can see that whenever the pattern starts with %, the index fails. If a fuzzy query starting with % is absolutely necessary, a covering index can recover some of the performance.
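A sketch of that fallback (assuming the index_name index on name created above): selecting only the indexed column lets MySQL scan the index instead of the table even with a leading %.

```sql
-- select name (not *) so index_name can cover the query despite the leading %
explain select name from emp where name like '%a%';
```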
5.6. Reduce the use of OR
# When we use or, it is easy to cause index failure.
explain select * from emp where age = 18 or deptid = 4
If we need to use or in real development, we can use union all or Union instead.
# Use union all instead
explain select * from emp where age = 18 union all select * from emp where deptid = 4;
# Use union instead
explain select * from emp where age = 18 union select * from emp where deptid = 4;
Do not use explicit or implicit conversions
# Try normal first
explain select * from emp where name="123";
# Now try the case where the index fails
explain select * from emp where name=123;
The index is invalid because MySQL implicitly converts the varchar values in the name column to numbers in order to compare them with the int 123, and that conversion invalidates the index.
5.7, Summary
Full-value matching is my favorite, and the leftmost prefix must be obeyed. The leading column must not be missing, and middle columns must not be skipped. Do no calculations on index columns; everything after a range condition loses the index. LIKE puts the percent sign only on the right, and a covering index does not use select *. Not-equals and IS NULL, as well as OR, can all affect index usage. Quotes around varchar values must not be dropped. Such are the tricks of SQL optimization.
6. Multi-table SQL optimization
6.1. Table-building statements
create table teacher2
(
    tid int(4) primary key,
    cid int(4) not null
);
insert into teacher2 values(1,2);
insert into teacher2 values(2,1);
insert into teacher2 values(3,3);
create table course2
(
    cid int(4),
    cname varchar(20)
);
insert into course2 values(1,'java');
insert into course2 values(2,'python');
insert into course2 values(3,'kotlin');
commit;
6.2, left join
When we do a join table query, we can think of a question, which table to add the index?
explain select * from teacher2 t left outer join course2 c on t.cid = c.cid where c.cname = 'java';
In general, we put the table with the smaller data set on the left and the table with the larger data set on the right; in the join, the left table drives the right table, i.e. the small table drives the big table. This is because at the bottom a SQL join is really two nested loops, an outer loop and an inner loop, and just as in ordinary development, the loop over the small data set goes on the outside and the loop over the large data set on the inside.
Indexes are built on frequently used fields. So for a left outer join, the index goes on the left table's join field; for a right outer join, it goes on the right table's field.
Following this rule, we index the teacher2 table.
alter table teacher2 add index index_teacher2_cid(cid);
# Execute again
explain select * from teacher2 t left outer join course2 c on t.cid = c.cid where c.cname = 'java';
We can see that one table now uses an index. The fields after where are usually indexed as well, so we optimize further.
# Index cname field
alter table course2 add index index_course2_cname(cname);
# Execute again
explain select * from teacher2 t left outer join course2 c on t.cid = c.cid where c.cname = 'java';
7. Other optimization methods
7.1, exists
The exists syntax puts the result of the main query into the subquery for verification (checking whether the subquery returns data; if it does, the check succeeds), and rows that pass the check are kept.
select tname from teacher where exists (select * from teacher);
# is equivalent to
select tname from teacher;
- If the main query's data set is larger, use `in`.
- If the subquery's data set is larger, use `exists`.
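As a sketch of the two forms against the teacher/course tables (a hypothetical rewrite, not from the original):

```sql
-- in: the subquery runs first, then the main query is checked against its result
select tname from teacher where tid in (select tid from course);
-- exists: each main-query row is verified against the correlated subquery
select tname from teacher where exists (select 1 from course c where c.tid = teacher.tid);
```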
7.2, order by
7.2.1 MySQL's sorting algorithms
When we use order by, Using filesort usually appears. Using filesort has two algorithms:
- Dual-path (two-pass) sort: the default before MySQL 4.1. It scans the disk twice: the first pass reads the sort fields from disk and sorts them in the buffer, and the second pass reads the other fields. This doubled I/O is very costly.
- Single-path (one-pass) sort: used after MySQL 4.1 to reduce I/O. It reads all the needed fields at once and sorts on the sort fields in the buffer. However, if max_length_for_sort_data is set too low, MySQL automatically falls back from single-pass to two-pass.
set max_length_for_sort_data = 2048;
7.2.1.1. Dual-way Sorting
Before MySQL 4.1, MySQL used dual-path sort, which literally scans the disk twice to get the data: the first pass reads the row pointers and the order by columns and sorts them; the second pass scans the sorted list and reads the corresponding rows from disk using those pointers. Sort fields are fetched from disk and sorted in the buffer; the other fields are fetched from disk afterwards.
I/O is known to be time-consuming, so after MySQL 4.1 an improved algorithm appeared: single-path sort.
7.2.1.2. Single-way sort
It reads all the columns required by the query from disk once, sorts them in the buffer by the order by column, and scans the sorted list to produce the output. It is more efficient because it avoids the second read and turns random I/O into sequential I/O, but it uses more memory, since it keeps every row in the buffer.
7.2.1.3 Problems with single-way Sorting
There are some pitfalls with single-path sorting: it may not be a single I/O but several. If the data is too large for the buffer, it has to be split into chunks, each read and sorted in the buffer separately, causing multiple reads and multiple I/Os. We can increase the size of the buffer with a SQL statement.
7.2.2. Strategies for improving order by queries
7.2.2.1. Increase the sort_buffer_size parameter
Regardless of the algorithm, increasing this parameter will improve efficiency, of course, depending on the capability of the system, because this parameter is adjusted between 1m-8m per process.
7.2.2.2. Increase the max_length_for_sort_data parameter
max_length_for_sort_data is the maximum total size of the queried columns that still allows the single-pass algorithm.
Increasing this parameter increases the probability of using the improved algorithm. However, if set too high, the probability of the total data capacity exceeding sort_buffer_size increases, with obvious symptoms of high disk I/O activity and low processor utilization. (Adjust between 1024 and 8192).
7.2.2.3. Reduce the fields queried after select
When the total size of the queried fields is smaller than max_length_for_sort_data and none of the queried or sort fields is a TEXT or BLOB, the improved single-pass algorithm is used; otherwise the old multi-pass algorithm is used.
Both algorithms may exceed the capacity of sort_buffer. When the data exceeds it, tmp files are created for merge sorting, resulting in multiple I/Os; single-pass sorting runs a higher risk of this, which is why sort_buffer_size should be increased.
So never use `select *`.
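A minimal sketch of inspecting and raising the two parameters for the current session (the values here are illustrative, not recommendations):

```sql
-- Check the current values
show variables like 'sort_buffer_size';
show variables like 'max_length_for_sort_data';
-- Raise them for this session only (illustrative values)
set session sort_buffer_size = 4 * 1024 * 1024;   -- 4MB
set session max_length_for_sort_data = 4096;
```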
7.2.2.4. Use a covering index
SQL only needs to use the index to return the data required by the query, instead of using the secondary index to find the primary key to query the data.
7.2.2.5. Ensure the consistency of sorting
We want all sort fields to sort in the same direction: either all ascending or all descending, not a mix of the two.
8. Slow query logs
8.1. The slow query log
8.1.1 What is it
MySQL's slow query log is a type of log record provided by MySQL. It records statements whose response time exceeds a threshold: specifically, SQL whose running time exceeds the value of long_query_time is written to the slow query log. The default value of long_query_time is 10, meaning statements that run longer than 10 seconds are recorded.
It tells us which SQL exceeds our maximum tolerable time. For example, if a statement runs for more than 5 seconds, we consider it slow SQL; we collect such statements and analyze them thoroughly together with the explain output discussed earlier.
8.1.2. How to use it
By default, slow query log is not enabled for MySQL database. You need to set this parameter manually.
Of course, enabling it is not recommended unless needed for tuning, because the slow query log has some performance impact. The slow query log writes its records to a file. It is usually enabled during development and disabled in production.
# Check whether slow log query is enabled. By default, off indicates that slow log query is disabled.
show variables like '%slow_query_log';
# Temporarily enable the slow query log (the setting is lost when MySQL restarts)
set global slow_query_log = 1;
To enable it permanently, we need to modify the MySQL configuration file.
# Edit the MySQL configuration file
vim /etc/my.cnf
Add two lines of configuration in [mysqld].
# Enable slow log query
slow_query_log=1
# Specify the path where slow query logs are stored
slow_query_log_file=/var/lib/mysql/localhost-slow.log
# Query the threshold of slow query
show variables like '%long_query_time%';
Set slow query threshold
# Temporary Settings. After the Settings are completed, you need to log in again to take effect
set global long_query_time = 5;
To make the threshold permanent, modify the MySQL configuration file.
# Edit the MySQL configuration file
vim /etc/my.cnf
# Append under [mysqld]
long_query_time=3
# Query the number of SQL entries that exceed the slow query threshold
# sleep 4S, simulate a SQL that exceeds 4s
select sleep(4);
show global status like '%slow_queries%';
If we want to know which SQL it was, we need to look in the slow query log file we just specified.
cat /var/lib/mysql/localhost-slow.log
8.2. The log analysis tool mysqldumpslow
Reading the raw slow query log is very unfriendly. Instead, we can use MySQL's own log analysis tool, mysqldumpslow, to analyze slow queries.
#View help information for mysqlDumpslow in Linux
mysqldumpslow --help
parameter | description |
---|---|
-s | how to sort the output; followed by one of the codes below |
c | sort by number of occurrences |
l | sort by lock time |
r | sort by rows returned |
t | sort by query time |
al | sort by average lock time |
ar | sort by average rows returned |
at | sort by average query time |
-t | return only the top N entries |
-g | followed by a regular-expression pattern, case-insensitive |
#Get the 10 SQL that return the most recordsets
mysqldumpslow -s r -t 10 /var/lib/mysql/atguigu-slow.log
#Get the top 10 SQL accesses
mysqldumpslow -s c -t 10 /var/lib/mysql/atguigu-slow.log
#Get the top 10 queries sorted by time with left joins
mysqldumpslow -s t -t 10 -g "left join" /var/lib/mysql/atguigu-slow.log
#It is recommended to pipe these commands into more, otherwise the output may flood the screen
mysqldumpslow -s r -t 10 /var/lib/mysql/atguigu-slow.log | more
9. Locks
9.1 What is the Locking Mechanism
There are many types of locks in MySQL, such as the familiar row lock. So why does MySQL have row locks? Because MySQL must ensure data consistency: when updating data, it first performs a current read (reading the latest data), and it must guarantee that between reading the data and completing the change, no other transaction modifies it. This way the result of your update statement is semantically "consistent".
9.2 Classification of locks
- By operation type:
  - Read lock (shared lock): multiple read operations on the same data can proceed simultaneously without interfering with each other.
  - Write lock (exclusive lock): until the current write finishes, no other read or write can proceed (like a piece of clothing someone has taken into the fitting room: others cannot even look at it).
- By operation scope:
  - Table lock: locks a whole table at once; coarse-grained. The MyISAM storage engine uses table locks: low overhead, fast locking, no deadlocks, but the lock range is large, conflicts are likely, and concurrency is low.
  - Row lock: locks one row at a time; fine-grained and less prone to conflict. The InnoDB storage engine uses row locks: higher overhead, slower locking, deadlocks are possible, but the lock range is small and concurrency is high, with only a small probability of high-concurrency problems such as dirty reads, phantom reads, and non-repeatable reads.
  - Page lock
9.3. Lock Operations
/* MYSQL/SQLSERVER supports autoincrement. Oracle needs sequence to realize autoincrement */
create table tablelock
(
id int primary key auto_increment,
name varchar(20))engine myisam;
insert into tablelock(name) values('a1');
insert into tablelock(name) values('a2');
insert into tablelock(name) values('a3');
insert into tablelock(name) values('a4');
insert into tablelock(name) values('a5');
9.3.1, Table locks
9.3.1.1 Adding read/Write Locks
# Add a read or write lock to a table
lock table table1 read;
lock table table2 write;
9.3.1.2 Viewing locks
# View the locked table. 1 indicates the locked table
show open tables;
9.3.1.3. Release the lock
# releases the lock
unlock tables;
9.3.1.4 Analyze the severity of table locking
show status like 'table%';
The result has two rows of data:

- Table_locks_immediate: the number of table locks that could be acquired immediately.
- Table_locks_waited: the number of table locks that had to wait; the larger the value, the fiercer the lock contention.

It is recommended to look at the ratio Table_locks_immediate / Table_locks_waited: if it is greater than 5000, the InnoDB engine is recommended; otherwise use MyISAM.
9.3.1.5, Summary
- If a session acquires a read lock on table A, that session can read table A but cannot write to it, and it cannot read or write any table other than A.
- In simple terms, with a read lock on table A, the current session can only read table A.
- Other sessions can read the table; to write to it they must wait until the session holding the lock releases it.
- With a write lock, the current session can select, insert, update, and delete on the locked table but cannot touch other tables; other sessions can operate on the locked table only after the current session releases the lock.
9.3.2, row locks
A row lock locks one row of data at a time, so operations on different rows do not interfere with each other.
create table linelock
(
id int(5) primary key auto_increment,
name varchar(20))engine=innodb;
insert into linelock(name) values('1');
insert into linelock(name) values('2');
insert into linelock(name) values('3');
insert into linelock(name) values('4');
insert into linelock(name) values('5');
To investigate row locks, we temporarily turn off autocommit so that we can commit manually.
set autocommit = 0;
9.3.2.1 Summary of row locks
- Table locks are released by unlock tables or by the end of a transaction; row locks are released only by the transaction (commit/rollback).
- If session X performs a DML operation on row A (with autocommit turned off, as in this study setup), other sessions must wait for session X to end its transaction (commit/rollback) before they can perform any operation on row A.
9.3.2.2 Precautions for Row Locking
If there is no index, the row lock is converted to a table lock.
show index from linelock ;
alter table linelock add index idx_linelock_name(name);
# The index is not invalidated
# Write to session 0
update linelock set name = 'ai' where name = '3' ;
# Session 1 performs a write operation on different data
update linelock set name = 'aiX' where name = '4' ;
# The index is invalidated (an implicit type conversion occurs)
# Write to session 0
update linelock set name = 'ai' where name = 3 ;
# Session 1 writes to different data
update linelock set name = 'aiX' where name = 4 ;
You can see that the operation blocks (waits for the lock): the implicit type conversion invalidated the index, so the operation escalated from a row lock to a table lock.
There is a very special case of row locking: the gap lock.
When a WHERE range covers a value that does not exist in the table, for example update linelock set name = 'x' where id > 1 and id < 9; where no row with id = 7 exists inside that range, id = 7 becomes a "gap". MySQL automatically locks this gap; this is called a gap lock, and it is also a kind of row lock. In other words, MySQL locks even the nonexistent row id = 7.
9.3.2.3 Four Ways to Disable Autocommit
- set autocommit =0 ;
- start transaction ;
- begin ;
- Add for update to SQL.
We can also add a row lock to a query, which uses only the fourth method:
Lock a query statement with for update.
select * from linelock where id =2 for update ;
9.3.2.4 Row lock Analysis
Row locks can be analyzed using SQL statements.
show status like '%innodb_row_lock%' ;
It has five parameters:
- Innodb_row_lock_current_waits: The number of locks currently waiting.
- Innodb_row_lock_time: total wait time. The total amount of time since the system was started.
- Innodb_row_lock_time_avg: Average wait time. Average wait time since system inception.
- Innodb_row_lock_time_max: specifies the maximum wait time. Maximum wait time since system startup.
- Innodb_row_lock_waits: the number of waits; the total number of waits since the system was started.