Welcome to follow github.com/hsfxuebao. I hope this article helps you; if it does, please give the repo a Star.
In database tuning, our goals are faster response times and greater throughput. Macro-level monitoring tools and micro-level log analysis can help us quickly find tuning ideas and approaches.
1. Optimize the database server
How should we think about database tuning? The flowchart below summarizes the thought process.
The whole process is divided into two parts: observation (Show Status) and action (Action). Steps labeled S represent observation (using the appropriate analysis tool); steps labeled A represent actions that can be taken based on that analysis.
If A2 and A3 do not solve the problem, we need to consider whether the SQL query performance of the database itself has reached its bottleneck. If we confirm that it has not, we re-check and repeat the steps above. If it has, we enter the A4 phase and need to consider adding servers, adopting a read/write splitting architecture, or splitting the database into separate databases and tables, e.g. vertical database sharding, vertical table partitioning, and horizontal table partitioning.
The above is the database tuning thought process. If we find that SQL execution shows irregular delays or stalls, we can use analysis tools to help us locate the problematic SQL. You can think of these three analysis tools as the three steps of SQL tuning: the slow query log, EXPLAIN, and SHOW PROFILE.
2. View system performance parameters
In MySQL, you can use the SHOW STATUS statement to query the performance parameters and execution frequency of some MySQL database servers. The syntax of the SHOW STATUS statement is as follows:
SHOW [GLOBAL | SESSION] STATUS LIKE 'parameter';
Some common performance parameters are as follows:
- Connections: the number of connection attempts to the MySQL server.
- Uptime: how long the MySQL server has been online.
- Slow_queries: the number of slow queries.
- Innodb_rows_read: the number of rows read by SELECT queries.
- Innodb_rows_inserted: the number of rows inserted by INSERT operations.
- Innodb_rows_updated: the number of rows updated by UPDATE operations.
- Innodb_rows_deleted: the number of rows deleted by DELETE operations.
- Com_select: the number of SELECT operations.
- Com_insert: the number of INSERT operations; a multi-row batch INSERT is counted only once.
- Com_update: the number of UPDATE operations.
- Com_delete: the number of DELETE operations.
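If you want several of these counters in one round trip, SHOW STATUS also accepts a WHERE clause. A minimal sketch (the particular counters chosen here are just an example):

SHOW GLOBAL STATUS
WHERE Variable_name IN ('Com_select', 'Com_insert', 'Com_update', 'Com_delete');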
2.1 Summary
- To query the number of connections to the MySQL server, run the following statement:
SHOW STATUS LIKE 'Connections';
- To query how long the server has been running, run the following statement:
SHOW STATUS LIKE 'Uptime';
- To query the number of slow queries on the MySQL server, run the following statement:
SHOW STATUS LIKE 'Slow_queries';
The slow query count can be combined with the slow query log to find the slow statements, after which the table structure or the query itself can be optimized.
For example, the following command shows all the InnoDB row-operation counters at once:
SHOW STATUS LIKE 'Innodb_rows_%';
3. Use last_query_cost to view the query cost of a SQL statement
Before an SQL query is executed, a query execution plan must be determined. If multiple execution plans exist, MySQL calculates the cost of each execution plan and selects the one with the lowest cost as the final execution plan.
If we want to see the cost of a query for an SQL statement, we can get the cost of the current query by looking at the value of the last_query_cost variable in the current session after executing the SQL statement. It is also a common metric we use to evaluate the efficiency of a query’s execution. This query cost corresponds to the number of pages that the SQL statement needs to read.
The student_info table is used as an example:
CREATE TABLE `student_info` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`student_id` INT NOT NULL ,
`name` VARCHAR(20) DEFAULT NULL,
`course_id` INT NOT NULL ,
`class_id` INT(11) DEFAULT NULL,
`create_time` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
If we want to query the record with id = 900001 and then look at the cost of that query, we can look it up directly on the clustered index:
SELECT student_id, class_id, NAME, create_time FROM student_info
WHERE id = 900001;
Running result (1 record, running time 0.042s)
Then we check the optimizer's cost for the last executed SQL statement; as it turns out, we only needed to retrieve one page:
mysql> SHOW STATUS LIKE 'last_query_cost';
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| Last_query_cost | 1.000000 |
+-----------------+----------+
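If you want to see where this figure comes from, the optimizer exposes the same cost model through EXPLAIN. A small sketch (assumes MySQL 5.7+, where EXPLAIN supports JSON output; the exact cost values depend on your data):

EXPLAIN FORMAT=JSON
SELECT student_id, class_id, name, create_time
FROM student_info
WHERE id = 900001;
-- In the JSON output, "cost_info": {"query_cost": "1.00"} matches
-- what Last_query_cost reports for the same statement.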
What if we want to query the student records with ids between 900001 and 900100?
SELECT student_id, class_id, NAME, create_time FROM student_info
WHERE id BETWEEN 900001 AND 900100;
Run result (100 records, run time 0.046s). Then we look at the optimizer's cost for the last executed SQL statement, which comes to about 20 pages:
mysql> SHOW STATUS LIKE 'last_query_cost';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| Last_query_cost | 21.134453 |
+-----------------+-----------+
You can see that about 20 times as many pages were read, yet the query time barely changed. The two queries take essentially the same time because the pages are loaded into the buffer pool sequentially and then searched there. Although the number of pages (last_query_cost) increased considerably, the buffer pool mechanism kept the query time almost flat.
Usage scenario: It is very useful for comparing overhead, especially if we have several query methods to choose from.
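For example, to compare two access paths, you can force a different plan and read last_query_cost again. A hedged sketch (IGNORE INDEX is just one way to provoke a full table scan here; actual cost values depend on your instance):

SELECT student_id, class_id, name, create_time
FROM student_info IGNORE INDEX (PRIMARY)
WHERE id = 900001;

SHOW STATUS LIKE 'last_query_cost';
-- Expect a page count close to the full size of the table,
-- far above the 1.000000 of the clustered-index lookup.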
SQL queries are a dynamic process, and from the perspective of page loading we can draw two conclusions:

- Position determines efficiency. If the page is already in the database buffer pool, efficiency is highest; otherwise the page must be read from memory or disk. For a single page, reading from memory is of course far more efficient than reading from disk.
- Batching determines efficiency. If we randomly read a single page from disk, efficiency is very low (around 10 ms), but with sequential reads the average cost of reading one page in a batch is much lower, and can even beat the average cost of randomly reading a single page in memory.
Therefore, do not panic when you encounter I/O; with the right method, efficiency can still be very high. First, consider where the data lives: frequently used data should stay in the buffer pool as much as possible. Second, make full use of disk throughput by reading data in batches, which raises the effective efficiency of reading each single page.
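As a quick way to check how well the buffer pool is serving the workload, you can compare two cumulative InnoDB counters (a rough sketch; both counters accumulate from server start):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- Innodb_buffer_pool_read_requests: logical read requests
-- Innodb_buffer_pool_reads: requests that missed the pool and went to disk
-- Hit ratio is roughly 1 - (reads / read_requests); closer to 1 is better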
4. Locate slow SQL: the slow query log
MySQL's slow query log records statements whose response time exceeds a threshold: any SQL statement that runs longer than the value of long_query_time is written to the slow query log. long_query_time defaults to 10, meaning that statements running for more than 10 seconds (strictly longer than 10) are considered beyond our maximum tolerance.
Its main role is to help us find the SQL queries that take especially long to execute, so that we can optimize them and improve the overall efficiency of the system. When our database server stalls or runs slowly, checking the slow query log helps us find those slow queries. For example, if we decide that any SQL taking more than 5 seconds counts as slow, we can collect every statement over 5 seconds and analyze it comprehensively together with EXPLAIN.
By default, the slow query log is disabled in MySQL and must be enabled manually. Unless you need it for tuning, it is not recommended to keep it on, because slow query logging has some performance impact.
Slow query logs can be written to a file.
4.1 Enabling the Slow Query Log Parameters
4.1.1 Enabling slow_query_log
set global slow_query_log='ON';
Then check whether the slow query log is enabled and the location of the slow query log file:
You can see that slow query logging is now enabled and the log file is saved at /var/lib/mysql/atguigu02-slow.log.
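SET GLOBAL only lasts until the server restarts. To make the setting permanent, the same options can go into the configuration file. A minimal sketch (the file path and threshold below are illustrative):

[mysqld]
slow_query_log=ON
slow_query_log_file=/var/lib/mysql/atguigu02-slow.log
long_query_time=3
log_output=FILE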
4.1.2 Changing the long_query_time threshold
To set the time threshold for slow queries, use the following command:
show variables like '%long_query_time%';
If we want to shorten the time, for example, to 1 second, we can set it like this:
# The new value does not take effect in the current session; it applies to newly connected clients:
mysql> set global long_query_time = 1;
mysql> show global variables like '%long_query_time%';

# To change the threshold for the current session as well:
mysql> set long_query_time = 1;
mysql> show variables like '%long_query_time%';
4.2 Viewing the Number of Slow Queries
Query the number of slow query records in the current system
SHOW GLOBAL STATUS LIKE '%Slow_queries%';
4.3 Case Demonstration
Step 1. Create a table:
CREATE TABLE `student` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`stuno` INT NOT NULL ,
`name` VARCHAR(20) DEFAULT NULL,
`age` INT(3) DEFAULT NULL,
`classId` INT(11) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=INNODB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Step 2: Set the parameter log_bin_trust_function_creators
If binary logging is enabled, creating the functions below reports an error:
This function has none of DETERMINISTIC......
To allow the functions to be created, enable the setting:
set global log_bin_trust_function_creators=1; # without GLOBAL it is valid only for the current session
Step 3: Create a function that randomly generates strings:
DELIMITER //
CREATE FUNCTION rand_string(n INT) RETURNS VARCHAR(255) # returns a random string of length n
BEGIN
    DECLARE chars_str VARCHAR(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFJHIJKLMNOPQRSTUVWXYZ';
    DECLARE return_str VARCHAR(255) DEFAULT '';
    DECLARE i INT DEFAULT 0;
    WHILE i < n DO
        SET return_str = CONCAT(return_str, SUBSTRING(chars_str, FLOOR(1 + RAND() * 52), 1));
        SET i = i + 1;
    END WHILE;
    RETURN return_str;
END //
DELIMITER ;

# Test
SELECT rand_string(10);
Generate random values:
DELIMITER //
CREATE FUNCTION rand_num(from_num INT, to_num INT) RETURNS INT(11)
BEGIN
    DECLARE i INT DEFAULT 0;
    SET i = FLOOR(from_num + RAND() * (to_num - from_num + 1));
    RETURN i;
END //
DELIMITER ;

# Test
SELECT rand_num(10,100);
Step 4: Create the stored procedure
DELIMITER //
CREATE PROCEDURE insert_stu1(START INT, max_num INT)
BEGIN
    DECLARE i INT DEFAULT 0;
    SET autocommit = 0;    # turn off autocommit so the whole batch commits once
    REPEAT
        SET i = i + 1;
        INSERT INTO student (stuno, name, age, classId)
        VALUES ((START + i), rand_string(6), rand_num(10,100), rand_num(10,1000));
    UNTIL i = max_num END REPEAT;
    COMMIT;
END //
DELIMITER ;
Step 5: Invoke the stored procedure
CALL insert_stu1(100001, 4000000);
4.4 Test and analysis
4.4.1 Test
mysql> SELECT * FROM student WHERE stuno = 3455655;
+---------+---------+--------+------+---------+
| id      | stuno   | name   | age  | classId |
+---------+---------+--------+------+---------+
| 3523633 | 3455655 | oQmLUr |   19 |      39 |
+---------+---------+--------+------+---------+
1 row in set (2.09 sec)

mysql> SELECT * FROM student WHERE name = 'oQmLUr';
+---------+---------+--------+------+---------+
| id      | stuno   | name   | age  | classId |
+---------+---------+--------+------+---------+
| 1154002 | 1243200 | OQMlUR |  266 |      28 |
| 1405708 | 1437740 | OQMlUR |  245 |     439 |
| 1748070 | 1680092 | OQMlUR |  240 |     414 |
| 2119892 | 2051914 | oQmLUr |   17 |      32 |
| 2893154 | 2825176 | OQMlUR |  245 |     435 |
| 3523633 | 3455655 | oQmLUr |   19 |      39 |
+---------+---------+--------+------+---------+
6 rows in set (2.39 sec)
As the results above show, it took 2.09 seconds to query the student whose stuno is 3455655, and 2.39 seconds to query the students whose name is 'oQmLUr'. Both have reached the order of seconds, which indicates that the current queries are inefficient. The next section analyzes why.
4.4.2 Analysis
show status like 'slow_queries';
Note: besides the variables above, one more system variable controls the slow query log: min_examined_row_limit, the minimum number of rows a query must examine. Together with the execution time, it forms the criterion for a slow query: if the number of rows examined is greater than or equal to this variable's value and the execution time exceeds long_query_time, the query is written to the slow query log; otherwise it is not.

mysql> show variables like 'min%';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| min_examined_row_limit | 0     |
+------------------------+-------+
1 row in set, 1 warning (0.00 sec)

The default value is 0. Combined with long_query_time = 10, it means that any query running longer than 10 seconds is logged as slow regardless of how many rows it examines. You can modify this value as needed through the my.ini file, or with a SET statement in SQL.
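For instance, if you only care about slow queries that actually scan many rows, you can raise the limit. A sketch (the threshold of 100 is arbitrary):

SET GLOBAL min_examined_row_limit = 100;
-- From now on, a statement is logged as slow only if it examined
-- at least 100 rows AND ran longer than long_query_time.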
4.5 Slow Query Log Analysis Tool: mysqldumpslow
In a production environment, manually analyzing the logs to find and examine SQL statements is obviously tedious, so MySQL provides the log analysis tool mysqldumpslow.
You can view slow query logs with mysqldumpslow.
To see mysqldumpslow's help information:
mysqldumpslow --help
The parameters of the mysqldumpslow command are as follows:

- -a: do not abstract numbers to N and strings to 'S'
- -s: sort order, one of:
  - c: number of accesses
  - l: lock time
  - r: rows returned
  - t: query time
  - al: average lock time
  - ar: average rows returned
  - at: average query time (the default)
  - ac: average number of queries
- -t: return only the top N entries
- -g: filter by a regular-expression pattern, case-insensitive
For example, to sort by query time and view the top five SQL statements, run:
mysqldumpslow -s t -t 5 /var/lib/mysql/atguigu01-slow.log
[root@bogon ~]# mysqldumpslow -s t -t 5 /var/lib/mysql/atguigu01-slow.log

Reading mysql slow query log from /var/lib/mysql/atguigu01-slow.log
Count: 1  Time=0s (2s)  Lock=0s (0s)  Rows=0 (13), root[root]@localhost
  SELECT * FROM student WHERE name = 'S'

Count: 1  Time=0s (2s)  Lock=0s (0s)  Rows=0 (2), root[root]@localhost
  SELECT * FROM student WHERE stuno = N

Died at /usr/bin/mysqldumpslow line 162, <> chunk 2.
Common usage in practice:

# Get the top 10 SQL statements that return the most records
mysqldumpslow -s r -t 10 /var/lib/mysql/atguigu-slow.log

# Get the top 10 SQL statements by query time that contain a left join
mysqldumpslow -s t -t 10 -g "left join" /var/lib/mysql/atguigu-slow.log

# It is recommended to pipe these commands through more,
# otherwise the output may flood the screen
mysqldumpslow -s r -t 10 /var/lib/mysql/atguigu-slow.log | more
4.6 Disabling Slow Query Logs
There are two ways to disable the slow query log on the MySQL server:
4.6.1 Mode 1: Permanent mode
[mysqld]
slow_query_log=OFF
Alternatively, comment out or delete the slow_query_log entry
[mysqld]
#slow_query_log =OFF
Restart the MySQL service and run the following command to query the slow log function:
SHOW VARIABLES LIKE '%slow%';             # check the slow query log status
SHOW VARIABLES LIKE '%long_query_time%';  # check the slow query time threshold
4.6.2 Mode 2: Temporary mode
Use a SET statement. (1) Disable the MySQL slow query log function. The SQL statement is as follows:
SET GLOBAL slow_query_log=off;
(2) Restart the MySQL service and run the SHOW statement to query the slow query log function
SHOW VARIABLES LIKE '%slow%';
SHOW VARIABLES LIKE '%long_query_time%';
4.7 Deleting Slow Query Logs
Run the SHOW statement to display the slow query log information. The SQL statement is as follows:
SHOW VARIABLES LIKE 'slow_query_log%';
According to the result, the default directory for storing slow query logs is the MySQL data directory. You can manually delete the slow query log files in this directory.
Run the mysqladmin flush-logs command to regenerate the slow query log file. After the command executes, a new slow query log file appears in the data directory:
mysqladmin -uroot -p flush-logs slow
Slow query logs are deleted and rebuilt with mysqladmin flush-logs. Note that once you run this command, slow query records are written only to the new log file; if you need the old records, back them up in advance.
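A cautious sequence backs the file up before flushing. A sketch (the path and credentials are illustrative):

# Back up the current slow log before it is reset
cp /var/lib/mysql/atguigu02-slow.log /var/lib/mysql/atguigu02-slow.log.bak

# Flush (reset) only the slow query log
mysqladmin -uroot -p flush-logs slow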
5. Run the SHOW PROFILE command to view the SQL execution cost
Show Profile is a tool MySQL provides to analyze what statements in the current session did and how many resources their execution consumed; it can serve as a measuring tool for SQL tuning. It is disabled by default, and when enabled it keeps the results of the most recent 15 statements.
We can check whether it is enabled at the session level:
show variables like 'profiling';
Turn on show profile by setting profiling to 'ON':
mysql> set profiling = 'ON';
The associated query statement is then executed. To see which profiles are available for the current session, use the following command:
show profiles;
You can see that there are two queries in the current session. If we want to see the cost of the last query, we can use:
show profile;
To view specific overhead types for a particular query, name them together with the query ID shown by show profiles:
show profile cpu,block io for query 2;
Common query parameters of show profile are as follows:
- ALL: displays ALL cost information.
- BLOCK IO: displays the BLOCK I/O overhead.
- CONTEXT SWITCHES: Cost of CONTEXT switching.
- CPU: displays the CPU cost.
- IPC: displays the sending and receiving cost information.
- MEMORY: displays the MEMORY cost.
- PAGE FAULTS: displays page fault overhead information.
- SOURCE: displays the cost information associated with Source_function, Source_file, and Source_line.
- SWAPS: Displays information about the cost of the number of SWAPS.
5.1 Conclusions for daily development
If any of the following four states appears in the show profile diagnostics, the SQL statement needs to be optimized:

- converting HEAP to MyISAM: the query result is too large to fit in memory, so the data is being moved to disk.
- Creating tmp table: a temporary table is created; data is copied into it, and the table is deleted after use.
- Copying to tmp table on disk: an in-memory temporary table is being copied to disk, which is a dangerous signal.
- locked: the statement was blocked waiting for locks.
Note: however, the SHOW PROFILE command is deprecated; the same information can be read from the profiling table in information_schema.
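A sketch of the equivalent lookup against information_schema (the query_id value is illustrative; session profiling must be enabled first, as above):

SELECT query_id, seq, state, duration
FROM information_schema.profiling
WHERE query_id = 2
ORDER BY seq;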
This article is not finished, and there is another article to follow. Stay tuned.
References
- MySQL Technical Insider: InnoDB Storage Engine (2nd Edition), chapters 15 and 16
- Database Index Design and Optimization