1. Select the right storage engine: InnoDB
Unless your tables are read-only or used for full-text search (hardly anyone uses MySQL for full-text search any more), you should default to InnoDB. In your own simple tests MyISAM may look faster than InnoDB, because MyISAM caches only indexes while InnoDB caches both data and indexes, and MyISAM does not support transactions. But if you set innodb_flush_log_at_trx_commit = 2 you can get close to MyISAM's performance (this one setting can make a hundredfold difference).
1.1 How to convert the existing MyISAM database to InnoDB:
# generate an ALTER TABLE statement for every table in the database
mysql -u [USER_NAME] -p -e "SHOW TABLES IN [DATABASE_NAME];" | tail -n +2 | xargs -I '{}' echo "ALTER TABLE {} ENGINE=InnoDB;" > alter_table.sql
# keep full-text search tables (search_*) on MyISAM
perl -p -i -e 's/(search_[a-z_]+ ENGINE=)InnoDB/\1MyISAM/g' alter_table.sql
# apply the conversion
mysql -u [USER_NAME] -p [DATABASE_NAME] < alter_table.sql
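If you prefer to generate the statements from inside MySQL instead of the shell, here is a minimal sketch using information_schema (review the output before running it, and keep any search_* full-text tables on MyISAM as above):
SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE=InnoDB;') AS stmt
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('information_schema', 'mysql', 'performance_schema');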
1.2 Create InnoDB files for each table:
innodb_file_per_table = 1 ensures that the ibdata1 file does not grow out of control, especially when mysqlcheck -o --all-databases is run.
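A minimal sketch of the setting and how to check it; note that tables created before the change stay inside ibdata1 until they are rebuilt (mydb.mytable is just a placeholder):
# my.cnf
[mysqld]
innodb_file_per_table = 1

-- verify at runtime
SHOW VARIABLES LIKE 'innodb_file_per_table';
-- rebuild an existing table into its own .ibd file
ALTER TABLE mydb.mytable ENGINE=InnoDB;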
2. Ensure that data is read from, and kept in, memory
2.1 Make innodb_buffer_pool_size large enough
Ideally, all of your data fits in innodb_buffer_pool_size; in other words, plan the buffer pool size according to the size of the data it has to hold. That way data can be read entirely from memory and disk operations are minimized.
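A rough way to see how much that is, sketched below, is to sum the data and index sizes from information_schema (these figures are estimates, and you should leave headroom for growth and for MySQL's other buffers):
SELECT engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema')
GROUP BY engine;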
2.1.1 How do I make sure innodb_buffer_pool_size is large enough that data is read from memory rather than disk?
Method 1
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_%';
+----------------------------------+--------+
| Variable_name                    | Value  |
+----------------------------------+--------+
| Innodb_buffer_pool_pages_data    | 129037 |
| Innodb_buffer_pool_pages_dirty   | 362    |
| Innodb_buffer_pool_pages_flushed | 9998   |
| Innodb_buffer_pool_pages_free    | 0      |  <-- !!
| Innodb_buffer_pool_pages_misc    | 2035   |
| Innodb_buffer_pool_pages_total   | 131072 |
+----------------------------------+--------+
6 rows in set (0.00 sec)
Here Innodb_buffer_pool_pages_free is 0, which means the InnoDB buffer pool is completely used up and innodb_buffer_pool_size needs to be increased.
Related settings: innodb_additional_mem_pool_size should be about 1/200 of the buffer pool size, and innodb_max_dirty_pages_pct about 80%.
Method 2
Alternatively, run the iostat -d -x -k 1 command to view disk operations.
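A further check worth doing is the buffer pool hit rate: Innodb_buffer_pool_reads counts reads that had to go to disk, while Innodb_buffer_pool_read_requests counts all logical reads, so their ratio should stay very close to zero (the variables are standard; the exact threshold is only a rule of thumb):
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- hit rate = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
-- e.g. 12,000 disk reads out of 52,000,000 requests is a hit rate of about 99.98%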
2.1.2 Check whether the server has enough memory for this plan
Run echo 1 > /proc/sys/vm/drop_caches to clear the operating system file cache, then look at the actual memory usage.
2.2 Data Preheating
By default, a page is cached in innodb_buffer_pool only after it has been read once, so the cache is cold right after the database starts and needs to be preheated so that the data on disk ends up in memory. Data preheating can greatly improve read speed. For an InnoDB database, you can preheat as follows:
1. Save the following script as /root/MakeSelectQueriesToLoad.sql:
SELECT DISTINCT
    CONCAT('SELECT ', ndxcollist, ' FROM ', db, '.', tb, ' ORDER BY ', ndxcollist, ';') SelectQueryToLoadCache
FROM (
    SELECT engine, table_schema db, table_name tb, index_name,
           GROUP_CONCAT(column_name ORDER BY seq_in_index) ndxcollist
    FROM (
        SELECT B.engine, A.table_schema, A.table_name, A.index_name, A.column_name, A.seq_in_index
        FROM information_schema.statistics A
        INNER JOIN (
            SELECT engine, table_schema, table_name
            FROM information_schema.tables
            WHERE engine = 'InnoDB'
        ) B USING (table_schema, table_name)
        WHERE B.table_schema NOT IN ('information_schema', 'mysql')
        ORDER BY table_schema, table_name, index_name, seq_in_index
    ) A
    GROUP BY table_schema, table_name, index_name
) AA
ORDER BY db, tb;
2. Generate the preheating queries:
mysql -uroot -AN < /root/MakeSelectQueriesToLoad.sql > /root/SelectQueriesToLoad.sql
3. Each time you restart the database, or before a full backup when the data needs to be preheated, run:
mysql -uroot < /root/SelectQueriesToLoad.sql > /dev/null 2>&1
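If you are on MySQL 5.6 or later, there is also a built-in warm-up mechanism that saves the buffer pool's page list at shutdown and reloads it at startup; a sketch of the relevant settings (check your version's documentation before relying on them):
# my.cnf
[mysqld]
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup  = 1

-- or trigger a dump immediately, without waiting for a shutdown
SET GLOBAL innodb_buffer_pool_dump_now = ON;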
2.3 Do not store data in SWAP
If it is a dedicated MySQL server, disable swap entirely. If it is a shared server, make sure innodb_buffer_pool_size is large enough, or pin the cache in a fixed region of memory with the memlock directive.
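A sketch of the corresponding settings; memlock is a my.cnf option, vm.swappiness is a Linux kernel knob, and locking memory may require raising the memlock ulimit for the mysql user:
# /etc/my.cnf
[mysqld]
memlock                  # lock mysqld's memory so it cannot be swapped out

# /etc/sysctl.conf
vm.swappiness = 0        # tell the Linux kernel to avoid swapping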
3. Periodically optimize and rebuild the database
mysql> CREATE TABLE mydb.mytablenew LIKE mydb.mytable;
mysql> INSERT INTO mydb.mytablenew SELECT * FROM mydb.mytable;
mysql> ALTER TABLE mydb.mytable RENAME mydb.mytablezap;
mysql> ALTER TABLE mydb.mytablenew RENAME mydb.mytable;
mysql> DROP TABLE mydb.mytablezap;
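If you can tolerate the table being locked while it is rebuilt, a simpler route to the same result for a single InnoDB table is OPTIMIZE TABLE, which InnoDB executes as a table rebuild:
OPTIMIZE TABLE mydb.mytable;
-- for InnoDB this is equivalent to rebuilding the table:
ALTER TABLE mydb.mytable ENGINE=InnoDB;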
4. Reduce disk write operations
4.1 Use a large enough write cache: innodb_log_file_size
Note that with a 1 GB innodb_log_file_size, crash recovery after a server failure can take on the order of 10 minutes. The recommended value is innodb_log_file_size = 0.25 * innodb_buffer_pool_size.
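For example, assuming a 4 GB buffer pool (the numbers are purely illustrative and trade write throughput against the recovery time noted above):
[mysqld]
innodb_buffer_pool_size = 4G
innodb_log_file_size    = 1G    # 0.25 * 4 GB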
4.2 innodb_flush_log_at_trx_commit
This option is closely tied to disk writes: with innodb_flush_log_at_trx_commit = 1 (the default) the log is flushed to disk at every commit, while with 0 or 2 it is flushed only about once per second. If your infrastructure is reliable enough, or the transactions are small and unimportant enough, you can use 0 or 2 to reduce disk operations.
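The three values behave roughly as follows (a summary, not a substitute for the manual):
[mysqld]
# 1 (default): write and flush the log at every commit - fully durable, most disk I/O
# 2: write the log at every commit, flush it to disk about once per second
#    (up to about a second of transactions can be lost on an OS crash or power failure)
# 0: write and flush the log about once per second
#    (up to about a second can be lost even if only mysqld crashes)
innodb_flush_log_at_trx_commit = 2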
4.3 Avoid double buffering
innodb_flush_method = O_DIRECT makes InnoDB bypass the operating system's page cache, so data is not buffered twice (once in the OS cache and again in the InnoDB buffer pool).
5. Improve the disk read/write speed
RAID 0 improves disk throughput and is especially important when using virtual disks such as EBS on EC2.
6. Make full use of indexes
6.1 Viewing existing table structures and indexes
SHOW CREATE TABLE db1.tb1\G
6.2 Adding Necessary Indexes
Indexes are the most important way to speed up queries; think of the inverted index used by search engines. Which indexes to add should be decided from the actual queries, for example from the slow query log or the general query log, or by analyzing individual queries with EXPLAIN. The two common forms are ALTER TABLE ... ADD UNIQUE INDEX and ALTER TABLE ... ADD INDEX.
6.2.1 For example, to optimize the user authentication table:
ALTER TABLE users ADD UNIQUE INDEX username_ndx (username);
ALTER TABLE users ADD UNIQUE INDEX username_password_ndx (username, password);
To warm this table's cache at startup, write the query to a file and load it with init-file:
echo "SELECT username, password FROM users;" > /var/lib/mysql/upcache.sql
Then add to the [mysqld] section of my.cnf:
init-file = /var/lib/mysql/upcache.sql
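To check that the new index is actually used, run EXPLAIN on the query (the literal value is just a placeholder; look for the index name in the key column of the output):
EXPLAIN SELECT username, password
FROM users
WHERE username = 'some_user'\G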
6.2.2 Use a framework that indexes or splits tables automatically
For example, a framework like Rails will add indexes automatically, and a framework like Drupal will split the table structure automatically. They point you in the right direction early in development, which is why starting from zero is often a bad idea for less experienced developers.
7. Analyze the query logs and slow query logs
Log all queries; this is especially useful with ORMs or other systems that generate queries for you:
log = /var/log/mysql.log
Do not enable this in production, or it will eat your disk space.
Also record queries that take longer than one second:
long_query_time = 1
log-slow-queries = /var/log/mysql/log-slow-queries
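To summarize the slow query log, the mysqldumpslow tool shipped with MySQL groups similar queries together; a sketch assuming the log path above:
mysqldumpslow -s t -t 10 /var/log/mysql/log-slow-queries   # top 10 queries by query time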
8. A radical approach: use a RAM disk
Infrastructure reliability is quite high these days; on EC2, for example, outright server hardware failure is rarely something you need to worry about. And memory is genuinely cheap, so it is easy to buy a server with dozens of gigabytes of RAM, put MySQL on a RAM disk, and back it up to persistent disk regularly. To migrate the MySQL directory to a 4 GB RAM disk:
mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4000M tmpfs /mnt/ramdisk/
mv /var/lib/mysql /mnt/ramdisk/mysql
ln -s /mnt/ramdisk/mysql /var/lib/mysql
chown -R mysql:mysql /mnt/ramdisk/mysql
9. Use MySQL as NoSQL
The B-tree is still one of the most efficient index structures, so MySQL is far from obsolete. Skip MySQL's SQL parsing layer with HandlerSocket and MySQL truly becomes a NoSQL store.
10. Other tips
Add LIMIT 1 to queries that only need a single row, so the scan stops at the first match (see the sketch after this list).
Store non-indexed bulk data, such as large article bodies, in a separate table so it does not affect other queries.
Avoid MySQL's built-in functions in queries, because queries that use them cannot be served from the query cache.
PHP sets up connections very quickly, so you do not need a connection pool; with one you may well run out of connections. Of course, a PHP program can exhaust connections even without a pool, for example when it uses @ignore_user_abort(TRUE).
Use an IP address instead of a domain name for the database connection, to avoid DNS resolution problems.
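As a minimal sketch of the LIMIT 1 tip, assuming the users table from section 6 (the literal value is a placeholder):
-- existence check: MySQL stops scanning after the first matching row
SELECT 1 FROM users WHERE username = 'some_user' LIMIT 1;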
11. Conclusion
After optimization you will often find that database performance improves by several times to several hundred times, so MySQL remains a good fit for most scenarios, and the cost of optimizing an existing system is far lower than refactoring it or migrating to NoSQL.