Contents

  • Preface
  • I. InnoDB
  • II. InnoDB Architecture Diagram
    • 1. The Role of the Buffer Pool
    • 2. The Role of the Change Buffer
  • III. Redo Log
  • IV. When Is the Log Flushed to Disk?
  • V. Why a Redo Log?
  • Closing Words

Preface

Even in 2021, many developers still do not understand MySQL's internals, and it shows in the SQL they write. Most of us no longer write huge SQL statements or stored procedures, so the real payoff of understanding the core of MySQL is writing better everyday SQL; MySQL also matters a great deal in interviews. There are many topics to cover, but the first thing to master is indexing, which I have written about before, along with how to analyze SQL using execution plans. If those topics feel fuzzy, go back and read the earlier articles. Once you understand indexes, the next step is the storage engine, because indexes and storage engines go hand in hand.

I. InnoDB

InnoDB was the first MySQL storage engine to provide foreign key constraints. Besides transactions, InnoDB supports row-level locking and, like Oracle, provides consistent non-locking reads, which increases read concurrency and improves performance without taking more locks. InnoDB is designed to maximize performance when processing large volumes of data, and its CPU efficiency is among the best of any disk-based relational database engine.

InnoDB is effectively a complete database system that sits behind MySQL. It has its own buffer pool, which caches both data and indexes, and it stores data and indexes in tablespaces, which may consist of several files. When each table is stored in its own file, the size of an InnoDB table is limited only by the operating system's maximum file size (historically around 2 GB on some file systems), not by a fixed engine limit.

II. InnoDB Architecture Diagram

The InnoDB memory structure consists of two main parts: the Buffer Pool and the Redo Log Buffer.

1. The Role of the Buffer Pool

The Buffer Pool is a region of memory in which InnoDB caches data pages and index pages as tables are accessed. InnoDB stores both primary (clustered) and secondary indexes on disk in units of pages.

When InnoDB accesses data on a page, it loads the entire page into the buffer pool and then reads or writes it there. When the operation completes, the page is not evicted from the buffer pool immediately; it stays cached so that the next access to the page can be served directly from memory. This saves disk I/O and speeds up data access.

On a dedicated MySQL server, typically 75% or more of physical memory is allocated to the buffer pool; its size is controlled by the innodb_buffer_pool_size parameter. Pages in the buffer pool are the same size as pages in the data files, 16 KB, while the operating system reads 4 KB at a time. Reading the data surrounding a requested row in one 16 KB page avoids repeated I/O, on the assumption that if one piece of data is read, its neighbors are likely to be read soon as well.

The LRU eviction strategy: the buffer pool's size is limited, so data cannot stay cached forever, which raises the question of how to evict pages efficiently. The answer is the LRU (Least Recently Used) algorithm: recently used entries sit at the head of a list, rarely used entries sink to the tail, and eviction removes entries from the tail. A naive LRU has several problems, so the LRU in MySQL is modified; most systems that use LRU optimize it in some way, Redis included.

LRU list: the LRU list holds all the data pages and index pages loaded into the Buffer Pool. Following the least-recently-used principle, the most recently used pages sit at the head of the list and the least recently used at the tail. When the Buffer Pool has no free pages left, cached pages are evicted from the tail of the LRU list, so after eviction the head of the list holds the hot data.

InnoDB's LRU list is not a traditional one: InnoDB optimizes it by splitting it into a young region (5/8 of the list) and an old region (3/8). The ratio can be adjusted with the innodb_old_blocks_pct parameter, whose default value of 37 means the old region takes about 37% of the list.

Young region: stores cached pages that are used very frequently; this part of the list is also called hot data. Old region: stores cached pages that are used less often; this part of the list is also called cold data.

Pages that may never be used can be loaded into the cache by read-ahead and full table scans. If these pages were inserted directly at the head of the LRU list, they would push hot data toward the tail and evict pages that are actually used frequently; bad money would drive out good. So when InnoDB reads a page into the buffer pool, it first inserts it at the head of the old region, and moves it to the head of the young region only on a later access, and only if a condition is met: the time of that later access minus the time of the page's first access must exceed a certain interval. The interval is controlled by the innodb_old_blocks_time parameter, which defaults to 1000 ms; a page is promoted to the head of the young region only when it is accessed again more than one second after it first entered the pool. This mechanism limits the damage that read-ahead and full table scans do to the cache hit ratio: pages they bring in that are never touched again stay in the old region and never displace the cached pages in the young region.
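The split-list behavior described above can be sketched in a few lines of Python. This is a toy model for illustration only, not InnoDB's implementation: real InnoDB keeps the old region at about 3/8 of the list (innodb_old_blocks_pct), which this sketch ignores; it only models the promotion rule controlled by innodb_old_blocks_time.

```python
from collections import OrderedDict

class MidpointLRU:
    """Toy model of InnoDB's split LRU list (illustrative only)."""

    def __init__(self, capacity, old_blocks_time=1.0):
        self.capacity = capacity
        self.old_blocks_time = old_blocks_time
        self.young = OrderedDict()  # page_id -> first access time (MRU at front)
        self.old = OrderedDict()    # page_id -> first access time (MRU at front)

    def access(self, page_id, now):
        if page_id in self.young:
            self.young.move_to_end(page_id, last=False)  # refresh: young head
        elif page_id in self.old:
            if now - self.old[page_id] >= self.old_blocks_time:
                # Re-accessed after the window: promote to the young head.
                self.young[page_id] = self.old.pop(page_id)
                self.young.move_to_end(page_id, last=False)
            # Re-accessed within the window (e.g. read-ahead or a table
            # scan touching the page again): it stays in the old sublist.
        else:
            # Page miss: "read from disk" and insert at the head of the
            # old region, not the head of the whole list.
            self.old[page_id] = now
            self.old.move_to_end(page_id, last=False)
            self._evict_if_needed()

    def _evict_if_needed(self):
        while len(self.young) + len(self.old) > self.capacity:
            # Evict from the tail of the old sublist first.
            (self.old or self.young).popitem(last=True)
```

A page touched twice in quick succession by a scan stays cold, while a page re-read a few seconds later is promoted to the hot region.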

When the Buffer Pool runs out of space, it uses the LRU algorithm to evict the least recently used pages. The Buffer Pool is divided into cache pages when MySQL initializes, and it maintains several linked lists besides the LRU list: the free list, which tracks unused cache pages, and the flush list, which tracks dirty pages, i.e. cached pages that have been modified.

As the architecture diagram shows, the Buffer Pool also contains two further structures: the Change Buffer and the adaptive hash index. The Buffer Pool plays a decisive role in read and write performance. When an insert is performed on a table with multiple indexes, every index tree must be updated. If an index page that needs updating is already in the Buffer Pool, it can be modified directly; if not, it must first be loaded from disk into memory before it can be modified. Add the possibility of page splits, which create even more index pages to update, and a single statement can require many disk I/O operations. Is there a way to optimize this? InnoDB's answer is the Change Buffer mechanism.

Adaptive hash index: for hot index pages in the buffer pool, InnoDB automatically builds an adaptive hash index in memory to speed up access to those pages. It is controlled by the innodb_adaptive_hash_index parameter and is enabled by default.

2. The Role of the Change Buffer

The Change Buffer is a special data structure that optimizes changes to secondary index pages. If a secondary index page is not in the Buffer Pool, InnoDB temporarily caches the change in the Change Buffer; later, when the index page is loaded into the Buffer Pool by some other read, InnoDB merges the buffered changes into it. The cached changes can come from INSERT, DELETE, and UPDATE operations, and merging them reduces random I/O against secondary indexes, so the Change Buffer can significantly speed up INSERT, UPDATE, and DELETE. The steps are:

  1. A DML operation against a secondary index page that is not in the Buffer Pool is recorded in the Change Buffer.
  2. The next time the page is read, it is loaded into the Buffer Pool, and the changes recorded in the Change Buffer are merged into it.
  3. Later, when the server is idle, the merged page is flushed to disk.
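The three steps above can be sketched as a small model. This is an illustrative simplification with assumed names (`pending`, `apply_dml`, and so on), not InnoDB's data structures: the point is only that changes to an uncached page are recorded without a disk read, then merged when the page is eventually loaded.

```python
class ChangeBufferSketch:
    """Illustrative model of change buffering (assumed names, not InnoDB code)."""

    def __init__(self, disk_pages):
        self.disk = disk_pages  # page_id -> set of index entries "on disk"
        self.buffer_pool = {}   # page_id -> set of entries in memory
        self.pending = {}       # page_id -> list of buffered (op, entry)

    def apply_dml(self, page_id, op, entry):
        if page_id in self.buffer_pool:
            # Page already cached: modify it directly.
            self._apply(self.buffer_pool[page_id], op, entry)
        else:
            # Page not cached: record the change, avoiding a disk read.
            self.pending.setdefault(page_id, []).append((op, entry))

    def read_page(self, page_id):
        if page_id not in self.buffer_pool:
            page = set(self.disk[page_id])  # load the page from "disk"
            for op, entry in self.pending.pop(page_id, []):
                self._apply(page, op, entry)  # merge buffered changes in order
            self.buffer_pool[page_id] = page
        return self.buffer_pool[page_id]

    @staticmethod
    def _apply(page, op, entry):
        if op == "insert":
            page.add(entry)
        elif op == "delete":
            page.discard(entry)
```

Several random index-page reads are replaced by one read plus an in-memory merge, which is where the I/O saving comes from.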

When is the change buffer merged into the index pages in the buffer pool? When the index pages are loaded into the buffer pool, the pending changes are merged in memory. When are dirty pages in the buffer pool written back to disk files? When the redo log is full; when the database is idle, by a background thread; and when the database shuts down. The change buffer is also written to disk in these same three cases.

III. Redo Log

The InnoDB storage engine first reads data pages from disk into memory (the buffer pool) and then modifies them there, so a modified page in the buffer pool is inconsistent with its copy on disk. Such a page is called a dirty page.

If dirty pages were flushed to disk immediately after every INSERT, UPDATE, or DELETE, a single update could require several disk I/Os, making I/O costs high and updates slow. So MySQL keeps the changes in memory and flushes dirty pages to disk only when the server is idle. But this creates a problem: if the server shuts down unexpectedly or MySQL crashes before the dirty pages reach disk, their data is lost. To avoid this, InnoDB writes each page change to a log file and persists that log to disk. When MySQL crashes and restarts, it uses the log file to recover, reapplying the changes to the data files, which makes updates durable. This log file is the Redo Log.
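The recovery idea described above can be sketched in miniature. This is an illustrative model, not InnoDB's log format: a change is appended to a durable log before the in-memory page is modified, dirty pages are flushed lazily, and after a "crash" recovery replays the log over the last on-disk state.

```python
class WALSketch:
    """Minimal write-ahead-logging sketch (illustrative, not InnoDB's format)."""

    def __init__(self, disk):
        self.disk = dict(disk)    # page_id -> value, the durable data file
        self.memory = dict(disk)  # the buffer pool's copy of the pages
        self.log = []             # durable redo log: (page_id, new_value)

    def update(self, page_id, value):
        self.log.append((page_id, value))  # 1. write the log record first
        self.memory[page_id] = value       # 2. then modify the page in memory

    def checkpoint(self):
        self.disk = dict(self.memory)      # flush dirty pages to the data file...
        self.log.clear()                   # ...after which the log can be reclaimed

    def recover(self):
        self.memory = dict(self.disk)      # a crash wiped memory: reload from disk
        for page_id, value in self.log:
            self.memory[page_id] = value   # replay redo records to redo the changes
```

Because the log write happens before (and independently of) the dirty-page flush, a committed change survives a crash even though the data file is stale.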

By default, the physical files backing the redo log are ib_logfile0 and ib_logfile1 in the database data directory.

The location, number, and size of the redo log files can be controlled with the following parameters.

The Change Buffer's share of the buffer pool defaults to 25% and can be inspected and changed without restarting the server:

SHOW VARIABLES LIKE 'innodb_change_buffer_max_size';
SET GLOBAL innodb_change_buffer_max_size = 25;

# Directory for the redo log files. The default is ./, meaning the log files
# live in the database data directory.
innodb_log_group_home_dir=./
# Number of files in the redo log group. The default is 2: the files are
# written in a circle, one filling up before the next is used.
innodb_log_files_in_group=2
# Size of each redo log file; the default is 48M.
innodb_log_file_size=16777216

This combination of logging and deferred disk writes is the Write-Ahead Logging (WAL) often mentioned in connection with MySQL: the key point is to write the log before writing the data to disk. Even so, writing a redo record straight to disk on every update would keep I/O costs high, so MySQL maintains a buffer in memory, the redo log buffer, to hold redo records first. Its size is controlled by the innodb_log_buffer_size parameter and defaults to 16 MB.

IV. When Is the Log Flushed to Disk?

When the log buffer is written to disk is controlled by the innodb_flush_log_at_trx_commit parameter. The default value is 1, meaning the log is flushed to disk as soon as a transaction commits. When a user program writes data to a disk file, it calls an operating system interface; the operating system has its own cache and flushes it to the disk file from time to time. A user program can call fsync to force data from the operating system cache onto the disk.

  • 0: Once per second, MySQL writes the log buffer to the log file and fsyncs it to disk. The log buffer is not written to the redo log file at each transaction commit, so if MySQL crashes or the server goes down, everything still in memory is lost: up to one second of transactions.
  • 1: At every transaction commit, MySQL writes the log buffer to the log file and fsyncs it to disk. This is the default. Committed transactions are never lost even if MySQL crashes; for full ACID compliance, this default setting of 1 must be used.
  • 2: At every transaction commit, MySQL writes the log buffer to the log file, and performs an fsync roughly once per second to push the data to disk. Since every commit reaches the operating system cache, a committed transaction survives a MySQL crash; but if the server itself goes down unexpectedly, data still in the operating system cache is lost, again up to one second of transactions.
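The three modes above can be summarized as a small model of where a committed transaction's redo records live: in MySQL's log buffer (lost if mysqld crashes), in the OS cache (lost only if the server goes down), or on disk (safe). The class and method names are illustrative, not MySQL internals.

```python
class LogFlushPolicy:
    """Models innodb_flush_log_at_trx_commit = 0, 1, or 2 (illustrative)."""

    def __init__(self, mode):
        self.mode = mode
        self.log_buffer = []  # lost if mysqld crashes
        self.os_cache = []    # lost if the server/OS crashes
        self.disk = []        # survives any crash

    def commit(self, txn):
        self.log_buffer.append(txn)
        if self.mode == 1:
            self._write()
            self._fsync()   # mode 1: write + fsync on every commit
        elif self.mode == 2:
            self._write()   # mode 2: write to the OS cache, fsync later
        # mode 0: do nothing at commit; the background tick handles it

    def background_tick(self):
        # Runs roughly once per second in all modes.
        self._write()
        self._fsync()

    def _write(self):
        self.os_cache += self.log_buffer
        self.log_buffer = []

    def _fsync(self):
        self.disk += self.os_cache
        self.os_cache = []
```

Tracing a commit through each mode makes the durability differences concrete: only mode 1 puts the record on disk before `commit` returns.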

Only a setting of 1 truly makes every transaction durable, but it is also the most expensive: fsync blocks until the write completes, and since disk writes are slow, MySQL's performance drops significantly. Settings 0 and 2 trade some safety for performance and are often used in businesses that favor throughput. With 2, data is not lost when MySQL itself restarts; only a server outage can lose up to one second of data, and since that probability is very low, the trade-off is often acceptable relative to the performance gain.

V. Why a Redo Log?

Log files are also disk files, so why write a redo log first instead of updating the data files directly? When a transaction commits, the data it changed may be scattered across different pages in different sectors. Writing each change means seeking to the right track and sector by disk address, which typically takes around 10 ms per access; a transaction's changes therefore need multiple disk I/Os, and this random I/O is slow. A redo log file, by contrast, occupies a contiguous region of disk: writing a transaction's redo records means finding the first sector and writing forward from there, essentially a single disk I/O. Flushing dirty pages is random I/O; writing the log is sequential I/O. WAL records the changes in a log file first and defers the page writes, improving system performance. Note that the redo log is used mainly for crash recovery; the data files on disk are still updated by flushing dirty pages from the buffer pool.

Characteristics of the redo log:

  1. Redo logs are generated by the InnoDB storage engine layer.
  2. A redo log is a physical log that records what changes were made to data on a data page.
  3. The number and size of the redo log files are fixed. When one file is full, InnoDB switches to the next; after the last file fills up, it circles back to the first and the cycle repeats. When the system is idle or the redo log is full, MySQL synchronizes the changes the log protects into the data files and then erases that portion of the redo log for reuse.
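The circular writing in point 3 can be sketched as a ring of fixed-size files. This is a simplified illustration (the file and slot sizes are arbitrary, and real InnoDB must complete a checkpoint before a file can be reused):

```python
class RedoLogRing:
    """Sketch of a circular redo log group: n_files fixed-size files
    written in a loop, as configured by innodb_log_files_in_group and
    innodb_log_file_size (sizes here are toy record counts)."""

    def __init__(self, n_files=2, file_size=4):
        self.files = [[] for _ in range(n_files)]
        self.file_size = file_size
        self.current = 0  # index of the file being written

    def append(self, record):
        if len(self.files[self.current]) >= self.file_size:
            # Current file is full: switch to the next file in the circle.
            self.current = (self.current + 1) % len(self.files)
            # Reusing a file overwrites its old records, which is only safe
            # once the dirty pages they protect have been flushed (a
            # checkpoint); this sketch simply clears the file.
            self.files[self.current] = []
        self.files[self.current].append(record)
```

With two files of two records each, the fifth record wraps around and overwrites the first file, just as the description above says.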

Closing Words

Thanks for reading, and for the likes. You are welcome to follow my WeChat official account [Village of the Apes].

To talk about Java interviews or learn together, you can add me on WeChat by searching for [codeYuanzhicunup]. If you have related technical questions, feel free to leave a comment and discuss. The official account is mainly for technical sharing, including analyses of common interview questions, source-code walkthroughs, microservice frameworks, technology hot spots, and more.