What is slow SQL?

As the name implies, slow SQL is SQL that takes a long time to run. How long counts as slow? Running speed is a relative concept, and different business scenarios have different requirements. MySQL uses the long_query_time parameter to define the threshold above which a statement is considered slow; the default is 10 seconds, and you can adjust the baseline by changing this value. Note that changing the global long_query_time does not take effect for connections that are already established (long connections); it only applies to new connections.

-- Set the session-level threshold
set session long_query_time = 1;
-- Set the global threshold
set global long_query_time = 1;
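To confirm which threshold is currently in effect, you can check the variable (a quick verification, assuming a standard mysql client session):

show variables like 'long_query_time';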

For reference: in production environments the slow SQL threshold is usually set to 0.1s–0.2s.

How does slow SQL happen?

Why are SQL queries slow? Let’s take a look at the SQL execution process:

As shown in the figure above, an SQL statement is executed in the following steps:

  1. If the query cache is enabled, MySQL first looks the statement up in the cache; on a hit, the result is returned to the client directly (a quick check for whether the cache is enabled is sketched after this list).
  2. On a cache miss, MySQL needs to work out what the statement is asking for, so the parser performs lexical and syntax analysis.
  3. Once the intent is clear, the optimizer generates an execution plan.
  4. Finally, the executor calls the interfaces provided by the storage engine and returns the result to the client.
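As a side note to step 1: the query cache was removed entirely in MySQL 8.0, so that step only applies to 5.7 and earlier. A minimal sketch of checking whether it is enabled there:

-- Query cache status (MySQL 5.7 and earlier only; these variables no longer exist in 8.0)
show variables like 'query_cache_type';
show variables like 'query_cache_size';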

During execution, the optimizer uses cost calculations to estimate the cheapest way to run the SQL, so that the executor can execute it as efficiently as possible. The cost is estimated along two basic dimensions: I/O cost + CPU cost.

  • I/O cost: the cost of reading data from disk into memory; the cost constant is 1.0.
  • CPU cost: the cost of processing the data once it is in memory (reading rows and evaluating conditions); the cost constant is 0.2. Both constants can be inspected in system tables, as sketched after this list.
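Since MySQL 5.7 the cost constants are exposed in system tables, so the values the optimizer actually uses are visible (and tunable). A minimal sketch, assuming privileges on the mysql schema; exact defaults vary slightly between versions:

-- io_block_read_cost (default 1.0) is stored in mysql.engine_cost
select * from mysql.engine_cost;
-- row_evaluate_cost (default 0.2 in MySQL 5.7) is stored in mysql.server_cost
select * from mysql.server_cost;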

How do you calculate the cost?

  1. Based on the search conditions, find all indexes that could possibly be used (the possible_keys column in EXPLAIN).
  2. Calculate the cost of a full table scan.
  3. Calculate the cost of executing the query with each candidate index.
  4. Compare the costs of the candidate plans and choose the one with the lowest cost.

Cost calculation for a full table scan: I/O cost + CPU cost (a worked example follows the formulas)

  1. I/O cost = number of data pages * 1.0 (cost constant) + 1.1 (fine-tuning constant)
  2. CPU cost = number of rows * 0.2 (cost constant) + 1.0 (fine-tuning constant)
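A worked example of the full table scan formula, with made-up numbers for the user table used in the examples below (InnoDB's default page size is 16 KB, so the page count is roughly Data_length / 16384):

-- Suppose show table status like 'user'\G reports (hypothetical values):
--   Data_length = 1638400 bytes  ->  1638400 / 16384 = 100 pages
--   Rows        = 10000
-- Full table scan cost:
--   I/O cost = 100 * 1.0 + 1.1   =  101.1
--   CPU cost = 10000 * 0.2 + 1.0 = 2001.0
--   Total                        = 2102.1
show table status like 'user'\G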

Index query cost calculation: I/O cost + CPU cost + back-to-table I/O cost + back-to-table CPU cost

  1. I/O cost = number of scan ranges * 1.0 (cost constant)
  2. CPU cost = number of rows * 0.2 (cost constant) + 0.01 (fine-tuning constant)
  3. Back-to-table I/O cost = number of rows * 1.0 (cost constant)
  4. Back-to-table CPU cost = number of rows * 0.2 (cost constant)

Note: when scanning an index, reading one scan range is treated as costing the same as reading one page, i.e., 1.0; likewise, each back-to-table lookup is counted as one page read (a worked example follows).
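And a worked example for an index range scan on the same hypothetical table, assuming a secondary index idx_age on an age column (all numbers are made up for illustration):

-- select * from user where age > 20 and age < 30, using secondary index idx_age
-- Suppose the optimizer estimates 1 scan range and 950 matching rows:
--   index I/O cost     = 1 * 1.0          =    1.0
--   index CPU cost     = 950 * 0.2 + 0.01 =  190.01
--   back-to-table I/O  = 950 * 1.0        =  950.0
--   back-to-table CPU  = 950 * 0.2        =  190.0
--   Total                                 = 1331.01
-- This is cheaper than the full table scan cost of 2102.1 above,
-- so the optimizer would choose the index plan.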

From these two dimensions we can see that the key factors affecting SQL execution efficiency are I/O overhead and CPU overhead. What drives each of them? Roughly the following:

  1. I/O cost
    • Data volume: more data means more I/Os.
    • Where the data is read from: from the cache or from disk, and whether an index allows the rows to be located quickly.
  2. CPU cost
    • Data processing: operations such as sorting and subqueries may first put the data into a temporary table and then process it (see the sketch after this list).
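A hypothetical illustration of the CPU-cost point: sorting on a column that has no index forces a filesort (extra CPU work and possibly a temporary file), while sorting along an existing index does not. The age column here is an assumption made for illustration:

-- Sorting on an unindexed column: EXPLAIN's Extra column typically shows "Using filesort"
explain select * from user order by age;
-- Sorting on the primary key: rows can be read in index order, no filesort needed
explain select * from user order by id;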

How to find slow SQL?

All types of activities that affect MySQL are recorded in log files. Common log files include:

  • Error log: Records the problems when MySQL starts, runs, and shuts down.
  • Binary log: Records all changes made to the MySQL database.
  • Slow query log: Records all SQL statements whose running time exceeds the long_query_time threshold.
  • Query log: Records all requests to the MySQL database.

You can find slow SQL through the slow query log. By default, MySQL does not enable the slow query log; you need to manually set the slow_query_log parameter to ON.

Slow log configuration operations:

  1. View the current slow log configuration: show variables like 'slow_query_log%';

As shown above:

  • slow_query_log: whether the slow query log switch is ON, i.e., enabled.
  • slow_query_log_file: the path of the slow query log file.
  • slow_query_log_always_write_time: queries that run longer than this value are always written to the slow query log.
  2. Enable the slow log: set global slow_query_log=1; the change takes effect after you exit and reopen the connection (a complete configuration sketch follows this step).
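Putting both steps together, a minimal runtime configuration sketch (SET GLOBAL changes are lost on restart, so persist them in my.cnf if they should survive a reboot):

-- Enable the slow query log and apply the threshold suggested earlier
set global slow_query_log = 1;
set global long_query_time = 0.2;
-- Verify the switch and the log file location
show variables like 'slow_query_log%';
show variables like 'long_query_time';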

How to optimize slow SQL?

Common Analysis Ideas

explain

MySQL provides the EXPLAIN command, which analyzes a SELECT statement and outputs the details of how it will be executed.

Method of use

explain select * from user where id > 1;

As shown above:

| Column | Meaning |
| --- | --- |
| select_type | Type of the SELECT clause |
| partitions | Partitions matched |
| type | Access type, i.e., how rows are found (ALL, index, range, ref, eq_ref, const, system, NULL) |
| possible_keys | Indexes that could be used |
| key | The index MySQL decides to use |
| key_len | Number of bytes of the index used |
| ref | Columns or constants compared against the index |
| rows | Estimated number of rows that must be examined |
| filtered | Estimated percentage of rows left after filtering by the condition, from 1 to 100; 100 means no rows are filtered out |
| Extra | Additional details about how MySQL resolves the query |

Using the Explain command, we can analyze some common causes of slow SQL:

  • Index usage problems, which can be checked via the possible_keys (indexes that could be used) and key (index actually used) columns:
    • No index is used
    • The optimizer chose the wrong index
    • No covering index is achieved
  • I/O overhead problems, which can be checked via the rows (estimated number of rows examined by the query) and filtered (ratio of rows kept to rows scanned) columns; a hypothetical reading is sketched after this list:
    • Too many rows are scanned
    • Useless columns with a heavy I/O cost are returned (such as TEXT, BLOB, JSON)
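A hypothetical reading of an EXPLAIN result along these lines (the name column and the idx_age index are assumptions made for illustration):

-- Assume user has a secondary index idx_age on age
explain select id, name from user where age > 20;
-- possible_keys = idx_age but key = NULL   -> the index was considered but not used
-- key = idx_age and Extra = Using index    -> covering index, no back-to-table lookup
-- rows very large and filtered very low    -> far more rows are scanned than are kept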

profile

EXPLAIN can only analyze the estimated execution plan; it cannot show what the SQL actually cost when it ran. For that, you can enable the profiling parameter. Once it is turned on, subsequent SQL statements record their resource costs, including I/O, context switches, CPU, memory and so on, and these figures let us pinpoint the bottleneck of the current slow SQL and optimize it further.

Method of use

  1. Check whether the feature is enabled via the system variables: show variables like '%profile%';
  2. Turn the switch on: set profiling=1;
  3. Execute the SQL statements you want to analyze.
  4. List the recorded statements and their Query_IDs: show profiles;
  5. Query the cost details of a statement by its Query_ID: show profile for query <query_id>;
  6. Turn the switch off: set profiling=0;
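An end-to-end sketch of a profiling session (SHOW PROFILE has been deprecated since MySQL 5.6.7 in favor of the Performance Schema, but it still works; the Query_ID value of 1 is just an example):

set profiling = 1;
select * from user where id > 1;           -- the statement to analyze
show profiles;                             -- note the Query_ID of the statement
show profile cpu, block io for query 1;    -- replace 1 with the actual Query_ID
set profiling = 0;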

You can run show profile all for query <query_id> to view the complete set of columns, covering the time spent in each phase of the statement's execution:

| Column | Meaning |
| --- | --- |
| Status | Execution phase |
| Duration | Time spent in the phase |
| CPU_user | User CPU time |
| CPU_system | System CPU time |
| Context_voluntary | Voluntary context switches |
| Context_involuntary | Involuntary context switches |
| Block_ops_in | Block input operations |
| Block_ops_out | Block output operations |
| Messages_sent | Messages sent |
| Messages_received | Messages received |
| Page_faults_major | Major page faults |
| Page_faults_minor | Minor page faults |
| Swaps | Number of swaps |
| Source_function | Source function |
| Source_file | Source file |
| Source_line | Source line |

Optimizer Trace

Profiling shows how long SQL takes at each phase, but not how the statement was actually parsed and optimized. Optimizer Trace is a tracing facility that records the entire process of parsing and optimizing a statement, and you can enable it to analyze a statement in depth. As mentioned above, EXPLAIN only shows the estimated execution plan: the optimizer uses its cost model to pick what it believes is the cheapest plan, but that estimate is not guaranteed to be correct in practice, so tracing lets us check whether each step of the decision really holds up.

Method of use

  1. View the system variable: show variables like '%optimizer_trace%';
  2. Turn on the optimizer trace: set optimizer_trace="enabled=on";
  3. Execute the SQL statement you want to analyze.
  4. View the trace results in the information_schema.optimizer_trace table: select * from INFORMATION_SCHEMA.OPTIMIZER_TRACE;

In the trace output you can view and analyze the execution tree, which has three stages:

  • join_preparation: preparation stage
  • join_optimization: optimization stage
  • join_execution: execution stage

  5. Turn the switch off when finished: set optimizer_trace="enabled=off"; (a complete sketch of the flow follows).
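Putting the steps together, a minimal sketch of a tracing session (\G simply formats the output vertically in the mysql client; the TRACE column holds the JSON):

set optimizer_trace = "enabled=on";
select * from user where id > 1;                    -- the statement to trace
select * from INFORMATION_SCHEMA.OPTIMIZER_TRACE\G  -- the TRACE column holds the JSON
-- Inside the JSON, join_optimization -> rows_estimation / considered_execution_plans
-- lists the per-plan cost figures the optimizer compared.
set optimizer_trace = "enabled=off";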