Hello everyone! I'm [programmer ring ring], and this is my first original post on Juejin. These are just my personal notes — comments and corrections are welcome!
The essence of MySQL optimization comes down to two goals: 1. Working with the database's core responsibilities (storing and querying data), make storage smaller, updates and queries faster, concurrency higher, and server resource utilization better. 2. In business scenarios with large data volumes and high concurrency, a sound design prevents database overload, data loss, and service failures, keeping the service available.
Below are the optimization suggestions I have summarized:
- Prefer the InnoDB engine. InnoDB supports transactions and row-level locking, offers good concurrency, and its buffer-pool page caching makes better use of CPU and memory.
- Use the utf8mb4 character set. It avoids transcoding, eliminates the risk of garbled characters, and covers the full Unicode range (including emoji) while still storing ASCII-heavy data compactly.
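A minimal sketch of a table definition following the two suggestions above (the table and column names are made up for illustration):

```sql
-- Hypothetical table: explicitly choose InnoDB and utf8mb4
CREATE TABLE user_account (
    id       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    nickname VARCHAR(64)     NOT NULL DEFAULT '',
    PRIMARY KEY (id)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4
  COLLATE = utf8mb4_general_ci;
```

On MySQL 5.7 and later InnoDB is already the default engine, but stating it explicitly documents the intent.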
- Minimize the use of stored procedures, views, triggers, and the like; keep computation logic in the application code. This prevents time-consuming in-database calculations from hurting the database's concurrency or even bringing it down.
- Use auto-increment primary keys. InnoDB organizes tables as B+ tree indexes; with an auto-increment key, rows are written in order, which improves insert performance, avoids page splits, and reduces fragmentation, improving space and memory usage.
- Avoid foreign keys, which couple tables together. Updates and deletes must touch the associated tables, which significantly hurts SQL performance and can even cause deadlocks.
- Avoid TEXT and BLOB columns. They waste disk and memory space, and queries on such large fields evict hot data from the cache, causing a sharp drop in the cache hit ratio and hurting database performance. If a large field is necessary, move it into a separate table so the hot data and the large-field data are stored apart.
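One way to separate hot data from a large field, using a hypothetical article table:

```sql
-- Hot data: small, frequently queried columns
CREATE TABLE article (
    id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    title     VARCHAR(128)    NOT NULL DEFAULT '',
    author_id BIGINT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (id)
) ENGINE = InnoDB;

-- Cold data: the large field lives in its own table,
-- fetched only when the full content is actually needed
CREATE TABLE article_content (
    article_id BIGINT UNSIGNED NOT NULL,
    content    TEXT            NOT NULL,
    PRIMARY KEY (article_id)
) ENGINE = InnoDB;
```

List pages then query only `article`, so the big `content` values never pollute the buffer pool.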
- Create appropriate indexes to avoid full table scans.
- Do not create too many indexes; keep it to five or fewer per table. An index is essentially an extra ordered copy of the data: it takes storage space and adds write overhead, so it should not be excessive. Use EXPLAIN to check which indexes a query actually hits.
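EXPLAIN can be prepended to any SELECT to see which index, if any, the optimizer chooses (the table and column names here are assumptions):

```sql
EXPLAIN SELECT id, nickname
FROM user_account
WHERE nickname = 'alice';
-- Inspect the `key` column of the output: NULL means no index was chosen,
-- and `type = ALL` means the query performs a full table scan.
```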
- Create indexes on the columns that commonly appear in WHERE or ORDER BY clauses. However, indexing is not worthwhile for low-cardinality columns, such as a "gender" field whose values are only "male/female/unknown".
- Avoid indexing frequently updated columns. Every update must also modify the B+ tree to maintain the index, which is a performance-intensive process.
- Composite indexes follow the "leftmost prefix" matching principle, so put the most selective column (the one with the highest hit ratio) first.
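A sketch of the leftmost-prefix rule, with a hypothetical composite index on columns (a, b, c):

```sql
CREATE INDEX idx_abc ON t (a, b, c);

-- Can use the index (a leftmost prefix is present):
SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE a = 1 AND b = 2;
SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3;

-- Cannot use the index (the leftmost column `a` is missing):
SELECT * FROM t WHERE b = 2 AND c = 3;
```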
- Give columns default values and declare them NOT NULL. NULL values complicate index use and comparisons, hurting query efficiency.
- Do not apply functions or expressions to indexed columns in a WHERE clause; doing so prevents the index from being used.
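For example, assuming an index on a `created_at` column, wrapping the column in a function defeats the index, while rewriting the predicate as a range keeps it usable:

```sql
-- Bad: the function is applied to the column, so the index cannot be used
SELECT * FROM orders WHERE DATE(created_at) = '2023-01-01';

-- Good: the column is left bare, so the index on created_at can be used
SELECT * FROM orders
WHERE created_at >= '2023-01-01 00:00:00'
  AND created_at <  '2023-01-02 00:00:00';
```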
- Avoid NOT IN and <> operators and leading-% fuzzy queries (LIKE '%...'); they force full table scans.
- Avoid implicit type conversions. For example, if phone is a string column, write phone='123456789', not phone=123456789.
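Concretely, assuming `phone` is an indexed VARCHAR column:

```sql
-- Bad: the numeric literal forces MySQL to cast `phone` to a number
-- for every row, so the index on phone is not used
SELECT * FROM user_account WHERE phone = 123456789;

-- Good: string compared to string, index usable
SELECT * FROM user_account WHERE phone = '123456789';
```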
- Build covering indexes where possible to avoid back-to-table lookups.
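A covering-index sketch with hypothetical column names: if the index contains every column the query needs, InnoDB can answer from the index alone without going back to the clustered index:

```sql
CREATE INDEX idx_name_age ON user_account (name, age);

-- Covered: both selected columns live in idx_name_age, so no
-- back-to-table lookup is needed (EXPLAIN Extra shows "Using index")
SELECT name, age FROM user_account WHERE name = 'alice';

-- Not covered: `email` is not in the index, so each matching row
-- requires an extra lookup in the clustered index
SELECT name, email FROM user_account WHERE name = 'alice';
```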
- Choose between unique and normal indexes carefully. A unique index must check uniqueness on every write and therefore cannot use the change buffer; pages often have to be read from disk, incurring random I/O and reducing efficiency. So prefer a normal index wherever a normal index suffices.
- Make sure UPDATE conditions hit an index. If no index key can be used for locking, InnoDB ends up locking every row it scans, which is effectively a table-level lock.
- If a VARCHAR column is very long, use a prefix index.
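A prefix-index sketch; the prefix length (10 here) and table name are made-up and should be tuned for selectivity on real data:

```sql
-- Index only the first 10 characters of a long VARCHAR column
CREATE INDEX idx_url_prefix ON page (url(10));

-- A rough way to choose the length: compare the selectivity of a
-- candidate prefix against that of the full column
SELECT
  COUNT(DISTINCT LEFT(url, 10)) / COUNT(*) AS prefix10_selectivity,
  COUNT(DISTINCT url)           / COUNT(*) AS full_selectivity
FROM page;
```

Pick the shortest prefix whose selectivity is close to the full column's.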
- Avoid JOIN queries and subqueries on large tables. They can generate temporary tables that consume memory and CPU and greatly hurt database performance. Instead, split associated queries into multiple single-table queries at the business-code level.
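One way the split might look, with hypothetical orders/users tables; the application stitches the two result sets together:

```sql
-- Instead of one large JOIN:
--   SELECT o.id, o.amount, u.name
--   FROM orders o JOIN users u ON o.user_id = u.id
--   WHERE o.status = 1;

-- Step 1: single-table query on orders
SELECT id, amount, user_id FROM orders WHERE status = 1;

-- Step 2: single-table lookup of the users collected in step 1
-- (the IN list is built by the application code from step 1's user_ids)
SELECT id, name FROM users WHERE id IN (1, 2, 3);
```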
- Avoid storing large files in the database. Save them to a dedicated file store, such as OSS, and keep only a reference.
- Choose field types and lengths based on actual business requirements to avoid wasting space.
- Split large transactions into multiple small ones to reduce the number of records locked and the time locks are held.
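For example, deleting millions of rows in one statement holds locks for a long time; a sketch of batching it instead (table and cutoff are hypothetical):

```sql
-- Instead of one huge transaction:
--   DELETE FROM event_log WHERE created_at < '2022-01-01';

-- Delete in small batches; the application repeats this statement
-- until it affects zero rows, so each transaction stays short
DELETE FROM event_log
WHERE created_at < '2022-01-01'
LIMIT 1000;
```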
- Place the most time-consuming transaction block as late in the program flow as possible, so its locks are held for the shortest time.
- For range-based updates, keep the filter conditions as narrow as possible to limit the impact of gap locks, which can lock records that should not be locked.
- Watch out for gap locks: run batch UPDATE and DELETE operations serially rather than concurrently to avoid unexpected deadlocks.
- For business systems with large data volumes, shard the data: split databases vertically by business domain, and when a single table grows too large, split that table horizontally and/or vertically.