Recently, I ran into a performance problem while implementing a list query feature. Because I didn't have deep back-end performance experience, it took me a long time after discovering the problem; I eventually tracked it down and fixed it by reading documentation and using analysis tools.
This article records the process of solving the problem and the new things I learned along the way. If I have misunderstood anything, please point it out.
Execution environment: Postgres 9.6.0, Sequelize 4.44.0
Problem background
The feature I needed to implement is a back-end list query interface involving three database tables: UserSurveys, Surveys, and Questionnaires. The entity relationship is 1:1:1.
The query starts from UserSurveys, which joins the Surveys table via the foreign key survey_id; Surveys in turn joins the Questionnaires table via the foreign key questionnaire_id.
The interface needs to support filtering the returned UserSurveys entries by Questionnaire.name. A simple requirement, done many times before — I wrote the following code without much thought and committed it quickly (irrelevant logic omitted). The project uses Sequelize, a Node.js ORM library.
await UserSurvey.findAndCountAll({
  offset,
  limit,
  where: {
    status,
  },
  include: [{
    model: Survey,
    required: true,
    include: [{
      model: Questionnaire,
      required: true,
      where: {
        name: {
          [Op.like]: `%${questionnaire_name}%`,
        },
      },
    }],
  }],
  order: [['created_at', 'DESC']],
});
When I passed in questionnaire_name, the request was very slow!
How slow? A search for [questionnaire] took 14.89s to complete!! A few refreshes and retries confirmed the problem came from the back-end interface.
Following the request route back into the back-end code confirmed that nothing I wrote was obviously wrong. So the problem must be in the SQL execution, right? I found the statement that was ultimately executed in the log file.
EXPLAIN ANALYZE SELECT "survey->questionnaire"."id" AS "survey.questionnaire.id", "survey->questionnaire"."name" AS "survey.questionnaire.name"
FROM ("user_surveys" AS "user_survey"
  INNER JOIN "surveys" AS "survey" ON "user_survey"."survey_id" = "survey"."id"
  INNER JOIN "questionnaires" AS "survey->questionnaire" ON "survey"."questionnaire_id" = "survey->questionnaire"."id"
    AND "survey->questionnaire"."name" LIKE '% questionnaire %')
WHERE "user_survey"."status" = 'PUBLISH'
ORDER BY "user_survey"."created_at" DESC
LIMIT 10 OFFSET 0;
The statement Sequelize generated uses INNER JOINs, as expected, and looks reasonable on its face — so the problem must lie in how Postgres executes it. Time to reach for the EXPLAIN command.
EXPLAIN command
Before Postgres executes a SQL statement, it analyzes the query conditions, data statistics, and table structure to choose an optimal Query Plan. The Query Plan describes how Postgres will scan the tables, join them, the estimated running costs, and so on.
By reading and understanding this information, developers can better understand how the query works.
Using the EXPLAIN command is as simple as adding EXPLAIN in front of the original statement. Take this example from the official documentation:
EXPLAIN SELECT * FROM tenk1;
QUERY PLAN
-------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244)
Executing EXPLAIN SELECT * FROM tenk1; shows that the tenk1 table will be traversed with a Seq Scan. cost=0.00..458.00 means an estimated startup cost of 0.00 and a total cost of 458.00 to return all records; an estimated 10000 rows will be returned, each row occupying 244 bytes.
The EXPLAIN command gives estimated execution plan data and does not actually run the statement. For more realistic execution data, use the ANALYZE parameter.
EXPLAIN ANALYZE SELECT * FROM tenk1;
QUERY PLAN
--------------------------------------------------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..458.00 rows=10000 width=244) (actual time=0.128..0.377 rows=10000 loops=1)
Planning time: 0.181 ms
Execution time: 0.501 ms
With the ANALYZE parameter you can also see how many times each node was executed (loops); the actual time and rows values are per-execution averages, and the planning and execution times of the whole statement are reported at the end.
Note that cost is in arbitrary units, and usually only its relative value matters, while actual time is measured in milliseconds of real time.
ANALYZE actually runs the statement, so be careful when analyzing UPDATE, INSERT, DELETE, and the like — wrap them in a transaction that you can roll back:
BEGIN;
EXPLAIN ANALYZE UPDATE ... ;
ROLLBACK; -- or COMMIT;
Read the Query Plan
The Query Plan is returned as a tree that is executed children-first, and the cost of an upper-level node includes the cost of all of its children.
Here’s an example:
EXPLAIN ANALYZE SELECT *
FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2
ORDER BY t1.fivethous;
The returned Query Plan:
--------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=717.34..717.59 rows=101 width=488) (actual time=7.761..7.774 rows=100 loops=1)
   Sort Key: t1.fivethous
   Sort Method: quicksort  Memory: 77kB
   ->  Hash Join  (cost=230.47..713.98 rows=101 width=488) (actual time=0.711..7.427 rows=100 loops=1)
         Hash Cond: (t2.unique2 = t1.unique2)
         ->  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244) (actual time=0.007..2.583 rows=10000 loops=1)
         ->  Hash  (cost=229.20..229.20 rows=101 width=244) (actual time=0.659..0.659 rows=100 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 28kB
               ->  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244) (actual time=0.080..0.526 rows=100 loops=1)
                     Recheck Cond: (unique1 < 100)
                     ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0) (actual time=0.049..0.049 rows=100 loops=1)
                           Index Cond: (unique1 < 100)
 Planning time: 0.194 ms
 Execution time: 8.008 ms
The -> arrows mark the child nodes of the tree. Removing the cost information for a moment, the trunk of the structure looks like this:
Sort
│  Sort Key: t1.fivethous
│  Sort Method: quicksort  Memory: 77kB
└── Hash Join
    │  Hash Cond: (t2.unique2 = t1.unique2)
    ├── Seq Scan on tenk2
    └── Hash
        └── Bitmap Heap Scan on tenk1
            │  Recheck Cond: (unique1 < 100)
            └── Bitmap Index Scan on tenk1_unique1
                │  Index Cond: (unique1 < 100)
Postgres reads tenk2 with a Seq Scan, and scans tenk1 with a Bitmap Heap Scan plus Bitmap Index Scan, reading the entries that meet the condition (unique1 < 100) into an in-memory Hash.
It then uses a Hash Join to join the scan results of the two child nodes on the Hash Cond: (t2.unique2 = t1.unique2).
Finally, quicksort is used to sort the query results by the t1.fivethous field.
What is an Index?
An index in a database works like the keyword index at the back of a book, sorted by character order. When we want to find where a keyword appears, we can locate the exact page through the index and turn straight to it, rather than searching page by page from the beginning.
Databases distinguish between clustered and non-clustered indexes. A table has exactly one clustered index, which the database builds from the table's unique primary key; it can have many non-clustered indexes, which users are free to create and maintain.
Running CREATE INDEX creates a non-clustered index, and the database maintains a structure mapping the non-clustered index entries back to the clustered index.
A lookup first finds the entry in the sorted non-clustered index, then follows it to quickly locate the actual row.
A note of caution though:
- Maintaining an index has its own cost: whenever the table's contents change, the index usually has to be updated accordingly.
- Using an index also has a cost: reading rows in index order makes the database read head jump between rows, which is much more expensive than a sequential read.
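To make the lookup cost concrete, here is a toy model in plain JS. Real Postgres indexes are B-trees stored on disk, so this only illustrates why an index lookup is cheap relative to a full scan; the names buildIndex and indexLookup are my own, purely for illustration.

```javascript
// Toy model of an index: keys kept sorted, each pointing at a row position.
// Real Postgres indexes are on-disk B-trees; this only shows the lookup cost.
function buildIndex(rows, key) {
  return rows
    .map((row, pos) => ({ key: row[key], pos }))
    .sort((a, b) => a.key - b.key);
}

function indexLookup(index, value) {
  let lo = 0, hi = index.length - 1;
  while (lo <= hi) {                      // O(log n) binary search
    const mid = (lo + hi) >> 1;
    if (index[mid].key === value) return index[mid].pos;
    if (index[mid].key < value) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;                              // not found
}

const rows = [{ id: 42 }, { id: 7 }, { id: 19 }];
const idx = buildIndex(rows, 'id');
console.log(indexLookup(idx, 19)); // → 2 (position of the row with id = 19)
```

A sequential scan touches every row (O(n)); the sorted index narrows the search by half at each step — and, as noted above, the price is keeping that sorted structure up to date on every write.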
Seq Scan, Index Scan, Bitmap Heap Scan
The planner may use different ways of scanning nodes when executing different queries.
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 7000;
QUERY PLAN
------------------------------------------------------------
 Seq Scan on tenk1  (cost=0.00..483.00 rows=7001 width=244)
   Filter: (unique1 < 7000)
In the example above, the planner uses a Seq Scan over every row of the tenk1 table, keeping the entries that satisfy the Filter condition.
In other cases, the planner will choose to use Index Scan to look up the Index criteria directly.
EXPLAIN SELECT * FROM tenk1 WHERE unique1 = 42;
QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using tenk1_unique1 on tenk1  (cost=0.29..8.30 rows=1 width=244)
Index Cond: (unique1 = 42)
In this type of plan, the rows of the table are fetched in index order, which means the read head has to jump back and forth between rows; with large data volumes this can be very expensive.
There is a third type, the Bitmap Heap Scan:
EXPLAIN SELECT * FROM tenk1 WHERE unique1 < 100 AND stringu1 = 'xxx';
QUERY PLAN
------------------------------------------------------------------------------
 Bitmap Heap Scan on tenk1  (cost=5.04..229.43 rows=1 width=244)
   Recheck Cond: (unique1 < 100)
   Filter: (stringu1 = 'xxx'::name)
   ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0)
         Index Cond: (unique1 < 100)
Bitmap Heap Scan is a two-step process: a Bitmap Index Scan first collects the locations of matching rows from the index, then the upper-level node sorts those locations into physical order before reading the table, which minimizes seek overhead.
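The two-step process can be sketched in JS as follows — a toy model only, since Postgres actually builds a bitmap of heap pages rather than an array of positions; the function names are mine.

```javascript
// Toy sketch of a Bitmap Heap Scan: first collect matching row locations
// from the index (the Bitmap Index Scan step), then sort them by physical
// position so the heap is read in one forward pass instead of random jumps.
function bitmapHeapScan(index, heap, matches) {
  const locations = index
    .filter(entry => matches(entry.key))   // index-only phase
    .map(entry => entry.pos);
  locations.sort((a, b) => a - b);         // visit rows in physical order
  return locations.map(pos => heap[pos]);  // heap phase: fetch actual rows
}

const heap = [{ unique1: 55 }, { unique1: 3 }, { unique1: 120 }, { unique1: 9 }];
const index = heap
  .map((row, pos) => ({ key: row.unique1, pos }))
  .sort((a, b) => a.key - b.key);

console.log(bitmapHeapScan(index, heap, k => k < 100));
// → [{ unique1: 55 }, { unique1: 3 }, { unique1: 9 }] (heap order, not key order)
```

Note the result comes back in physical (heap) order, not index order — which is exactly why this scan type trades ordered output for cheaper disk access.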
Let’s move on to table joins.
Nested loops, Hash Join, Merge Join
A Nested Loop join works by looping over the rows of one table (the driver, or outer, table) and probing the other (inner) table for each row, usually through an index. Every row of the driver table is joined with the matching records of the inner table, just like a nested loop in code.
EXPLAIN SELECT *
FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;
QUERY PLAN
--------------------------------------------------------------------------------------
 Nested Loop  (cost=4.65..118.62 rows=10 width=488)
   ->  Bitmap Heap Scan on tenk1 t1  (cost=4.36..39.47 rows=10 width=244)
         Recheck Cond: (unique1 < 10)
         ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.36 rows=10 width=0)
               Index Cond: (unique1 < 10)
   ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.29..7.91 rows=1 width=244)
         Index Cond: (unique2 = t1.unique2)
In this example, the planner joins the tables with a Nested Loop: it scans the tenk1 table with a Bitmap Heap Scan, obtaining 10 rows, then for each of those rows performs an inner Index Scan on tenk2 to join rows matching (unique2 = t1.unique2). The running cost is roughly [outer scan cost 39.47] + [10 × inner scan cost 7.91] + some CPU time.
Described in JS code, the process is roughly as follows: for each row of tenk1, a lookup is performed in tenk2; without an index the time complexity is O(MN).
for (const t1 of tenk1) {
  for (const t2 of tenk2) {
    if (t1.unique2 === t2.unique2) {
      t1.tenk2 = t2;
      break;
    }
  }
}
If you modify the query conditions slightly, you might get a different kind of join:
EXPLAIN SELECT *
FROM tenk1 t1, tenk2 t2
WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;
QUERY PLAN
------------------------------------------------------------------------------------------
 Hash Join  (cost=230.47..713.98 rows=101 width=488)
   Hash Cond: (t2.unique2 = t1.unique2)
   ->  Seq Scan on tenk2 t2  (cost=0.00..445.00 rows=10000 width=244)
   ->  Hash  (cost=229.20..229.20 rows=101 width=244)
         ->  Bitmap Heap Scan on tenk1 t1  (cost=5.07..229.20 rows=101 width=244)
               Recheck Cond: (unique1 < 100)
               ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..5.04 rows=101 width=0)
                     Index Cond: (unique1 < 100)
In this example, the planner selects a Hash Join: tenk1 (usually the smaller table) is read through the Bitmap Heap Scan into an in-memory hash table, and each row from the tenk2 scan is then matched against that hash.
Described in JS code, the process is roughly as follows; the time complexity of this method is O(N + M).
const dict = {};
for (const t1 of tenk1) {
  dict[t1.unique2] = t1;
}
for (const t2 of tenk2) {
  const t1 = dict[t2.unique2];
  if (t1) t1.tenk2 = t2;
}
There is also the Merge Join, which requires that both inputs be sorted on the join key.
EXPLAIN SELECT *
FROM tenk1 t1, onek t2
WHERE t1.unique1 < 100 AND t1.unique2 = t2.unique2;
QUERY PLAN
------------------------------------------------------------------------------------------
 Merge Join  (cost=198.11..268.19 rows=10 width=488)
   Merge Cond: (t1.unique2 = t2.unique2)
   ->  Index Scan using tenk1_unique2 on tenk1 t1  (cost=0.29..656.28 rows=101 width=244)
         Filter: (unique1 < 100)
   ->  Sort  (cost=197.83..200.33 rows=1000 width=244)
         Sort Key: t2.unique2
         ->  Seq Scan on onek t2  (cost=0.00..148.00 rows=1000 width=244)
For sorting large numbers of rows, a sequential scan plus sort is usually more efficient than an index scan, which requires discontinuous disk access.
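Following the pattern of the earlier sketches, a Merge Join can be described in JS roughly as follows — a toy sketch assuming unique join keys (as with unique2 here); a real Merge Join also has to handle duplicate keys on both sides.

```javascript
// Toy Merge Join: both inputs are sorted on the join key, so one forward
// pass over each side finds all matches in O(M + N).
function mergeJoin(left, right, key) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i][key] === right[j][key]) {
      out.push({ ...left[i], ...right[j] }); // emit the joined row
      i++; j++;
    } else if (left[i][key] < right[j][key]) {
      i++;                                   // advance whichever side is behind
    } else {
      j++;
    }
  }
  return out;
}

const t1 = [{ unique2: 1 }, { unique2: 3 }, { unique2: 5 }];
const t2 = [{ unique2: 3, name: 'a' }, { unique2: 4, name: 'b' }, { unique2: 5, name: 'c' }];
console.log(mergeJoin(t1, t2, 'unique2'));
// → [{ unique2: 3, name: 'a' }, { unique2: 5, name: 'c' }]
```

Since neither cursor ever moves backwards, the join itself is linear; the cost lives in producing the sorted inputs, whether by index scan or by an explicit sort as in the plan above.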
Locating the problem
Once you know how to use the EXPLAIN command, go back to the original question and see why my query statement was slow.
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=256.52..256.52 rows=1 width=540) (actual time=7257.566..7257.568 rows=10 loops=1)
   ->  Sort  (cost=256.52..256.52 rows=1 width=540) (actual time=7257.564..7257.566 rows=10 loops=1)
         Sort Key: user_survey.created_at DESC
         Sort Method: top-N heapsort  Memory: 26kB
         ->  Nested Loop  (cost=26.88..256.51 rows=1 width=540) (actual time=156.527..7257.419 rows=196 loops=1)
               ->  Nested Loop  (cost=0.00..221.74 rows=1 width=548) (actual time=0.957..7057.564 rows=1647 loops=1)
                     Join Filter: (survey.questionnaire_id = "survey->questionnaire".id)
                     Rows Removed by Join Filter: 6539694
                     ->  Seq Scan on questionnaires "survey->questionnaire"  (cost=0.00..130.07 rows=1 width=532) (actual time=0.009..1.591 rows=3441 loops=1)
                           Filter: ((name)::text ~~ '% questionnaire %'::text)
                           Rows Removed by Filter: 176
                     ->  Seq Scan on surveys survey  (cost=0.00..67.96 rows=1896 width=32) (actual time=0.001..1.247 rows=1901 loops=3441)
               ->  Bitmap Heap Scan on user_surveys user_survey  (cost=26.88..34.75 rows=2 width=24) (actual time=0.121..0.121 rows=0 loops=1647)
                     Recheck Cond: (((status)::text = 'PUBLISH'::text) AND (survey_id = survey.id))
                     Heap Blocks: exact=134
                     ->  BitmapAnd  (cost=26.88..26.88 rows=2 width=0) (actual time=0.120..0.120 rows=0 loops=1647)
                           ->  Bitmap Index Scan on user_surveys_status_index  (cost=0.00..10.23 rows=242 width=0) (actual time=0.064..0.064 rows=310 loops=1647)
                                 Index Cond: ((status)::text = 'PUBLISH'::text)
                           ->  Bitmap Index Scan on user_surveys_survey_id_index  (cost=0.00..16.33 rows=1753 width=0) (actual time=0.055..0.055 rows=72 loops=1647)
                                 Index Cond: (survey_id = survey.id)
 Planning time: 1.434 ms
 Execution time: 7257.672 ms
(22 rows)
As you can see from the Query Plan, this Query takes 7.2 seconds to execute! What’s the problem?
 Nested Loop  (cost=0.00..221.74 rows=1 width=548) (actual time=0.957..7057.564 rows=1647 loops=1)
   Join Filter: (survey.questionnaire_id = "survey->questionnaire".id)
   Rows Removed by Join Filter: 6539694
   ->  Seq Scan on questionnaires "survey->questionnaire"  (cost=0.00..130.07 rows=1 width=532) (actual time=0.009..1.591 rows=3441 loops=1)
         Filter: ((name)::text ~~ '% questionnaire %'::text)
         Rows Removed by Filter: 176
   ->  Seq Scan on surveys survey  (cost=0.00..67.96 rows=1896 width=32) (actual time=0.001..1.247 rows=1901 loops=3441)
The report shows that the planner joins surveys and questionnaires with a Nested Loop: the outer Seq Scan on questionnaires returns 3441 rows, and for each of them the inner Seq Scan on surveys runs again — 3441 times in total. As mentioned earlier, the worst-case time complexity of this approach is O(MN).
The solution is to give the planner a better alternative to this poorly performing table join: add an index on surveys.questionnaire_id.
CREATE INDEX surveys_questionnaire_id_index ON surveys (questionnaire_id);
Execute EXPLAIN ANALYZE again and you can see that the planner now uses the added index. With the indexed lookup the table join is significantly faster, and the total query time drops from 7.2s to 160ms.
...
 Nested Loop  (cost=4.34..156.90 rows=1 width=548) (actual time=0.030..7.091 rows=1647 loops=1)
   ->  Seq Scan on questionnaires "survey->questionnaire"  (cost=0.00..130.07 rows=1 width=532) (actual time=0.008..0.839 rows=3441 loops=1)
         Filter: ((name)::text ~~ '% questionnaire %'::text)
         Rows Removed by Filter: 176
   ->  Bitmap Heap Scan on surveys survey  (cost=4.34..26.74 rows=8 width=32) (actual time=0.001..0.001 rows=0 loops=3441)
         Recheck Cond: (questionnaire_id = "survey->questionnaire".id)
         Heap Blocks: exact=281
         ->  Bitmap Index Scan on surveys_questionnaire_id_index  (cost=0.00..4.34 rows=8 width=0) (actual time=0.001..0.001 rows=0 loops=3441)
               Index Cond: (questionnaire_id = "survey->questionnaire".id)
...
 Planning time: 0.692 ms
 Execution time: 160.381 ms
The next cost center is the outer Nested Loop join, where the Bitmap Heap Scan on user_surveys runs 1647 times at about 0.085ms each.
 Nested Loop  (cost=31.20..191.79 rows=1 width=540) (actual time=0.971..148.585 rows=196 loops=1)
   ->  Nested Loop  (cost=4.34..157.03 rows=1 width=548) (actual time=0.024..6.963 rows=1647 loops=1)
         ...
   ->  Bitmap Heap Scan on user_surveys user_survey  (cost=26.86..34.74 rows=2 width=24) (actual time=0.085..0.085 rows=0 loops=1647)
         Recheck Cond: (((status)::text = 'PUBLISH'::text) AND (survey_id = survey.id))
         Heap Blocks: exact=134
         ->  BitmapAnd  (cost=26.86..26.86 rows=2 width=0) (actual time=0.085..0.085 rows=0 loops=1647)
               ->  Bitmap Index Scan on user_surveys_status_index  (actual time=0.015..0.015 rows=310 loops=1647)
                     Index Cond: ((status)::text = 'PUBLISH'::text)
               ->  Bitmap Index Scan on user_surveys_survey_id_index  (actual time=0.069..0.069 rows=72 loops=1647)
                     Index Cond: (survey_id = survey.id)
Most of that time is spent scanning two separate indexes and combining them, about 0.085ms per loop:
 BitmapAnd  (cost=26.86..26.86 rows=2 width=0) (actual time=0.085..0.085 rows=0 loops=1647)
   ->  Bitmap Index Scan on user_surveys_status_index  (actual time=0.015..0.015 rows=310 loops=1647)
         Index Cond: ((status)::text = 'PUBLISH'::text)
   ->  Bitmap Index Scan on user_surveys_survey_id_index  (actual time=0.069..0.069 rows=72 loops=1647)
         Index Cond: (survey_id = survey.id)
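A quick sanity check on where the time goes (the per-loop figures are averages, so this is approximate): 1647 loops at roughly 0.085ms each accounts for almost all of the ~148.6ms spent in the outer Nested Loop.

```javascript
// Rough per-node contribution: loops × average actual time per loop.
const loops = 1647;
const perLoopMs = 0.085;                 // Bitmap Heap Scan on user_surveys
const contributionMs = loops * perLoopMs;
console.log(contributionMs.toFixed(1));  // ≈ 140.0 of the ~148.6 ms Nested Loop
```

This kind of loops-times-average arithmetic is often the quickest way to spot which node in a large plan is worth optimizing.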
Once again, the fix is an index — this time a multi-column one:
CREATE INDEX user_surveys_status_survey_id_index ON user_surveys (status, survey_id);
Note that the columns of a multi-column index are ordered from left to right; query conditions should match that left-to-right order to make the best use of the index.
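One way to picture the left-to-right rule (a toy model in JS, not Postgres internals): the index entries are sorted first by status, then by survey_id, so equality on status selects one contiguous slice, within which survey_id is still sorted and can narrow the search further. A condition on survey_id alone matches scattered entries, so the leading column cannot be skipped.

```javascript
// Toy composite index on (status, survey_id): entries sorted by the
// leftmost column first, then the next one. pos points at the heap row.
const index = [
  { status: 'DRAFT',   survey_id: 2, pos: 0 },
  { status: 'DRAFT',   survey_id: 9, pos: 3 },
  { status: 'PUBLISH', survey_id: 1, pos: 2 },
  { status: 'PUBLISH', survey_id: 7, pos: 1 },
];

// Equality on the leading column selects one contiguous slice...
const published = index.filter(e => e.status === 'PUBLISH');
// ...within which survey_id is still sorted, so it can narrow further.
const hit = published.find(e => e.survey_id === 7);
console.log(hit.pos); // → 1

// A condition on survey_id alone matches entries scattered across the
// whole index, so the entire structure would have to be scanned.
```

This is why the (status, survey_id) index serves both the status-only filter and the combined status + survey_id condition in the query, while an index starting with survey_id would not help the status-only case.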
Execute EXPLAIN again
 Limit  (cost=163.81..163.82 rows=1 width=540) (actual time=10.097..10.099 rows=10 loops=1)
   ->  Sort  (cost=163.81..163.82 rows=1 width=540) (actual time=10.096..10.096 rows=10 loops=1)
         Sort Key: user_survey.created_at DESC
         Sort Method: top-N heapsort  Memory: 26kB
         ->  Nested Loop  (cost=4.76..163.80 rows=1 width=540) (actual time=0.258..10.023 rows=196 loops=1)
               ->  Nested Loop  (cost=4.34..157.03 rows=1 width=548) (actual time=0.027..6.812 rows=1647 loops=1)
                     ->  Seq Scan on questionnaires "survey->questionnaire"  (cost=0.00..130.21 rows=1 width=532) (actual time=0.008..0.810 rows=3441 loops=1)
                           Filter: ((name)::text ~~ '% questionnaire %'::text)
                           Rows Removed by Filter: 176
                     ->  Bitmap Heap Scan on surveys survey  (cost=4.34..26.74 rows=8 width=32) (actual time=0.001..0.001 rows=0 loops=3441)
                           Recheck Cond: (questionnaire_id = "survey->questionnaire".id)
                           Heap Blocks: exact=281
                           ->  Bitmap Index Scan on surveys_questionnaire_id_index  (cost=0.00..4.34 rows=8 width=0) (actual time=0.001..0.001 rows=0 loops=3441)
                                 Index Cond: (questionnaire_id = "survey->questionnaire".id)
               ->  Index Scan using user_surveys_status_survey_id_index on user_surveys user_survey  (cost=0.42..6.75 rows=2 width=24) (actual time=0.002..0.002 rows=0 loops=1647)
                     Index Cond: (((status)::text = 'PUBLISH'::text) AND (survey_id = survey.id))
 Planning time: 0.622 ms
 Execution time: 10.163 ms
The execution time is now down to 10ms. Note that this is only the time the database reports for executing the statement; adding data transfer to the client, ORM result processing, network transfer, and so on brings it closer to the time the user finally perceives on the front end.
Finally, in the Chrome Network panel, the time for a query initiated from the front end has dropped back into a reasonable range (the front-end Input component itself applies a 500ms debounce).
Conclusion
Having worked through the problem once and learned a systematic, effective method, I no longer have to blindly guess whether to add an index, and performance problems can be resolved faster and more efficiently.
Again, this article is my notes from the learning process; given my limited experience there may be inaccuracies, so treat it only as a reference. For further study, please refer to the official documentation.
Recommended reading
- www.postgresql.org/docs/9.4/us…
- Thoughtbot.com/blog/readin…
- www.postgresql.eu/events/pgda…
- Dataschool.com/sql-optimiz…
- explain.depesz.com/