This is the fourth day of my participation in the August More Text Challenge. For details, see: the August More Text Challenge.

Examples from life

We have all seen plenty of queries in company codebases that start with select * ...

  • The reasoning usually goes: somebody else wrote select *, so I will too; it saves a lot of trouble

The slow query

  • First, at the most basic level, consider whether the query is accessing more data than it needs
  • Most low-performing queries can be optimized by reducing the amount of data accessed
  • select * in particular imposes extra I/O, memory, and CPU costs on the server
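As a minimal illustration of "access only the data you need" (the column names here are illustrative, not from the original), select only the columns the application actually uses:

```sql
-- Instead of fetching every column:
SELECT * FROM stu WHERE id = 1;

-- Fetch only what the application actually uses:
SELECT id, name FROM stu WHERE id = 1;
```

Beyond saving bytes on the wire, the narrower query also makes a covering index possible, which the sections below rely on.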

Three indicators of slow query overhead in the database

  • Response time
  • Number of rows scanned
  • Number of rows returned

Whether the query uses an index

select * from stu where id = 1;
  • With an index on id, the query is optimized to read only the matching rows
  • Without an index on id, the engine may have to scan hundreds or thousands of rows

Tip: use the EXPLAIN command in MySQL and check the rows column to see the estimated number of rows scanned
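A quick sketch of that tip (the second column name, name, is a hypothetical unindexed column):

```sql
-- Indexed lookup: EXPLAIN's rows column should show a very small estimate
EXPLAIN SELECT * FROM stu WHERE id = 1;

-- Unindexed predicate: rows approaches the full table size (a table scan)
EXPLAIN SELECT * FROM stu WHERE name = 'tom';
```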

  • The best case is to filter out non-matching records at the storage engine layer, by using an index in the WHERE clause
  • The next best case is a covering index scan: non-matching records are filtered out by WHERE at the server layer, with no need to go back to the table
  • The worst, and most common, case is to read rows from the data table and only then filter out the records that do not meet the criteria
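A minimal sketch of the covering-index case (the index name and columns are assumptions for illustration):

```sql
-- A composite index covers queries that touch only its columns:
ALTER TABLE stu ADD INDEX idx_addr_id (address, id);

-- Covered: address and id are both in the index, no table lookup needed
SELECT id FROM stu WHERE address = 'bj';

-- Not covered: name is not in the index, so each match goes back to the table
SELECT name FROM stu WHERE address = 'bj';
```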

How to optimize

  1. Use a covering index: put all the columns the query needs into the index, so there is no need to go back to the table
  2. Split the data out (for example, move large or rarely used columns into a separate table)
  3. Restructure queries (a large query can be decomposed into several small queries)

  • Typical targets for restructuring: count(), limit, and max()

count

  • The best optimization for count() is to maintain a summary table, because count() inevitably scans a large number of rows
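A minimal sketch of the summary-table idea (the table and column names are assumptions, and in practice the counter would be kept up to date by application code or triggers):

```sql
-- Seed the counter once with a full scan
CREATE TABLE stu_count (total INT NOT NULL);
INSERT INTO stu_count (total) SELECT COUNT(*) FROM stu;

-- Maintain it incrementally on every write to stu
UPDATE stu_count SET total = total + 1;  -- after an INSERT into stu
UPDATE stu_count SET total = total - 1;  -- after a DELETE from stu

-- Reading the count is now a single-row lookup instead of a scan
SELECT total FROM stu_count;
```

The trade-off is an extra write on every insert or delete in exchange for O(1) reads, which pays off when the count is read far more often than the table changes.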

limit

  • LIMIT is most often used for paging, as in the following code

select id from stu order by id limit 1000, 20;

  • This statement reads 1020 rows, discards the first 1000, and returns the 20 rows between positions 1000 and 1020
  • So the best optimization is to use the index, turning the LIMIT query into a scan from a known position
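One common way to do that rewrite is keyset (seek) pagination, a sketch assuming the application remembers the last id it returned on the previous page:

```sql
-- Offset-based: reads and discards the first 1000 rows
SELECT id FROM stu ORDER BY id LIMIT 1000, 20;

-- Position-based: the index jumps straight past the last seen id
SELECT id FROM stu WHERE id > 1000 ORDER BY id LIMIT 20;
```

The cost of the offset version grows with the page number, while the keyset version stays constant; the limitation is that it only supports "next page" style navigation, not jumping to an arbitrary page.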

min() and max()

  • First, imagine scanning along the primary key index: the first value found is the minimum we want to return
  • So after the primary key index, we can use LIMIT 1 to control the amount of data read, achieving the effect of min() without calling min()
select id
from stu
use index(primary) where address = 'bj' limit 1;
  • This scans as few records as possible

  • Finally, let's work through a couple of practice problems to wrap up
Find the second highest Salary in the `Employee` table:

+----+--------+
| Id | Salary |
+----+--------+
| 1  | 100    |
| 2  | 200    |
| 3  | 300    |
+----+--------+

Expected result:

+---------------------+
| SecondHighestSalary |
+---------------------+
| 200                 |
+---------------------+

Answer:

select max(Salary) SecondHighestSalary
from Employee
where Salary < (select max(Salary) from Employee);
Find all duplicate email addresses in the `Person` table:

+----+---------+
| Id | Email   |
+----+---------+
| 1  | [email protected] |
| 2  | [email protected] |
| 3  | [email protected] |
+----+---------+

Expected result:

+---------+
| Email   |
+---------+
| [email protected] |
+---------+

Answer:

select Email
from Person
group by Email
having count(Email) >= 2;