1: The full flow of an UPDATE statement
UPDATE t_user SET name = 'Li Si' WHERE id = 8;
- First, let's trace what happens between you writing the SQL and the storage engine touching the data. To execute any SQL statement you first need a connection to the database: an address, a port, a username, and a password. You could manage the connection yourself over raw JDBC, but these days a database driver encapsulates all of that. And if a thread opens a connection to MySQL, executes one statement, and then destroys the connection, that is wasteful; we all know frequent creation and destruction costs performance, so there is also the concept of a database connection pool, which maintains a set of live connections. When your program needs to execute a SQL statement, it borrows a connection from the pool, executes, and puts the connection back (as sketched below). On the server side, MySQL receives your SQL through the SQL interface, parses it, lets the query optimizer pick the cheapest execution plan, and has the executor call the storage engine, which finds the data, modifies it, and returns the result.
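The borrow-execute-return cycle looks roughly like this. A minimal sketch using HikariCP as one common pool implementation (my choice for illustration, not something the flow above prescribes), with placeholder connection details:

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;

public class PooledUpdate {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://localhost:3306/test"); // placeholder URL
        config.setUsername("user");                            // placeholder
        config.setPassword("password");                        // placeholder
        config.setMaximumPoolSize(10); // pool keeps up to 10 live connections

        try (HikariDataSource pool = new HikariDataSource(config);
             // borrow a connection; close() returns it to the pool
             Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "UPDATE t_user SET name = ? WHERE id = ?")) {
            ps.setString(1, "Li Si");
            ps.setInt(2, 8);
            ps.executeUpdate();
        }
    }
}
```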
The buffer pool is the InnoDB storage engine's in-memory cache. Remember, MySQL's data ultimately lives on disk, but random disk reads and writes are notoriously slow, so MySQL adds a layer of caching: instead of interacting with the disk directly, modifications are first made in the buffer pool. More on that in a minute. There is also the OS cache: on Linux, writing to a file does not immediately touch the disk either; a layer of system cache sits in between. On my 8 GB machine, nearly 4 GB is used as system cache. I'll come back to that in a minute too. (You can check your buffer pool's size yourself, as below.)
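A small sketch over JDBC that asks MySQL for the buffer pool size, assuming placeholder credentials:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BufferPoolSize {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")) {
            if (rs.next()) {
                long bytes = Long.parseLong(rs.getString("Value"));
                System.out.printf("buffer pool: %d MB%n", bytes / (1024 * 1024));
            }
        }
    }
}
```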
- 1: Check whether the buffer pool already contains the data page. If not, read it from disk and load it into the buffer pool.
- 2: Write an undo log recording the state before the modification. Rolling back the transaction needs this record; otherwise there is no way to know what to restore.
- 3: Modify the data in the buffer pool: change the name of the row whose id = 8 to "Li Si".
- 4: Write a redo log recording the change.
- 5: Flush the redo log to disk (preparing to commit the transaction).
- 6: Write the binlog and flush it to disk.
- 7: Record the binlog file name and offset in the redo log, along with the commit flag (the sketch after this list maps these steps onto client code).
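From the client's point of view, all seven steps hide behind one update and one commit. A minimal JDBC sketch with placeholder connection details, with comments marking roughly where each step happens:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpdateFlow {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {
            conn.setAutoCommit(false); // take manual control of the commit
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE t_user SET name = ? WHERE id = ?")) {
                ps.setString(1, "Li Si");
                ps.setInt(2, 8);
                // Steps 1-4 happen inside InnoDB while this runs: load the
                // page into the buffer pool, write the undo log, modify the
                // cached page, write the redo log (in memory).
                ps.executeUpdate();
            }
            // Steps 5-7 happen around here: flush the redo log to disk,
            // write the binlog, record its position plus the commit flag.
            conn.commit();
        }
    }
}
```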
That's pretty much the whole flow, but you may have noticed that so far we've only changed memory, not the disk file holding the actual data. In fact, a background thread keeps flushing modified pages back to disk. There is a question, though: if MySQL crashes at step 5, right as I commit the transaction, is the data lost? This brings us to the flush strategy, controlled by the innodb_flush_log_at_trx_commit parameter. Set to 0, the redo log is not flushed to disk even when the transaction commits (a background thread flushes it about once a second). Set to 1, the redo log is flushed to disk on every commit. Set to 2, it is written to the OS cache on commit and may reach the disk about a second later. You may wonder: why not simply always flush to disk, and what are the other values for? Any flush to disk costs efficiency and reduces throughput, so MySQL gives you more choices; some scenarios don't need to be so strict. The binlog has a similar policy (sync_binlog): the default 0 writes only to the OS cache, while 1 flushes to disk on every commit. In fact, even if step 7 fails, as long as both logs have already reached the disk, crash recovery will find both records valid, add the commit flag, and the transaction succeeds. A hedged example of setting these parameters follows.
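Setting these knobs is just a couple of statements. A sketch assuming a connection with privileges to change global variables (placeholder credentials):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class FlushPolicy {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "password");
             Statement st = conn.createStatement()) {
            // 0 = don't flush the redo log on commit (a thread flushes ~1/sec)
            // 1 = flush the redo log to disk on every commit (safest)
            // 2 = write to the OS cache on commit; disk maybe a second later
            st.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 1");
            // sync_binlog: 0 = OS cache only, 1 = flush binlog on every commit
            st.execute("SET GLOBAL sync_binlog = 1");
        }
    }
}
```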
- That covers the macro picture; now let's look at the micro level.
Generally speaking, the buffer pool defaults to 128 MB. If your machine has, say, more than 30 GB of memory, you can set the buffer pool much larger, such as 2 GB; the larger the cache, the more queries are served from memory and the faster they run. On disk, each of your tables can be treated as a file, a tablespace, divided into groups of extents: each group contains 256 extents, each extent contains 64 consecutive pages, and each page is 16 KB and holds many data rows. The data page is basically the smallest unit of I/O. When you execute a statement, the relevant data page is read from disk and loaded into the buffer pool, where it becomes a cache page. On startup the buffer pool allocates a lot of empty 16 KB cache pages plus several bookkeeping structures: a free list (which cache pages are empty), a flush list (which pages have been modified), and an LRU list (which pages are hot; a data page just loaded from disk starts at the cold end of the list, and if it is accessed again about a second later it is moved to the hot head). The flush and LRU lists start out empty. When a page is loaded from disk into memory, InnoDB takes a cache page's address from the free list, puts the data in, applies the change, removes that page from the free list, adds it to the LRU list and the flush list, and, by the way, records it in a hash table to show that the data is cached. A background thread periodically flushes pages on the flush and LRU lists to disk; once more memory is needed, a flushed page is released: it goes back onto the free list and is removed from the flush list, the LRU list, and the hash table. A toy sketch of this bookkeeping follows.
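To make the bookkeeping concrete, here is a toy model of the free list, LRU list, flush list, and page hash. This is a deliberately simplified sketch of the idea (real InnoDB uses midpoint insertion with a time window, proper eviction, and much more), not its actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;

public class ToyBufferPool {
    static final int PAGE_SIZE = 16 * 1024;        // 16 KB cache pages

    static class Page {
        long pageNo = -1;                          // which disk page is cached
        final byte[] data = new byte[PAGE_SIZE];
    }

    private final Deque<Page> freeList = new ArrayDeque<>();      // empty pages
    private final LinkedList<Page> lru = new LinkedList<>();      // head = hot
    private final LinkedList<Page> flushList = new LinkedList<>();// dirty pages
    private final Map<Long, Page> pageHash = new HashMap<>();     // cached?

    ToyBufferPool(int capacityInPages) {
        for (int i = 0; i < capacityInPages; i++) freeList.add(new Page());
    }

    /** Read a page: serve from cache if present, otherwise "load from disk". */
    Page read(long pageNo) {
        Page p = pageHash.get(pageNo);
        if (p != null) {             // cache hit: promote toward the hot head
            lru.remove(p);           // (real InnoDB waits ~1s before promoting)
            lru.addFirst(p);
            return p;
        }
        p = freeList.poll();         // grab an empty page (eviction omitted)
        p.pageNo = pageNo;           // pretend 16 KB were read from disk here
        pageHash.put(pageNo, p);
        lru.addLast(p);              // fresh pages start at the cold end
        return p;
    }

    /** Modify a page in memory only; a background thread flushes it later. */
    void write(long pageNo, int offset, byte value) {
        Page p = read(pageNo);
        p.data[offset] = value;
        if (!flushList.contains(p)) flushList.add(p);  // mark the page dirty
    }

    /** What the background flush thread would do for one dirty page. */
    void flushOne() {
        Page p = flushList.poll();
        if (p == null) return;
        // ... write p.data back to disk, then release the cache page:
        lru.remove(p);
        pageHash.remove(p.pageNo);
        p.pageNo = -1;
        freeList.add(p);             // the page is free for reuse again
    }
}
```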
- So what exactly ends up on disk? Take an INSERT: before the page in the buffer pool is modified, an undo log records the reverse operation (for an insert, a delete), so the change can be rolled back; then the redo log for the change is written to memory, and later the redo log is written to disk. Remember that a single redo record is only a few bytes, or at most tens of bytes, and flushing it to disk on its own would be too wasteful, so there is the concept of a redo log block (512 bytes) and a redo log buffer (16 MB by default), which is essentially a set of cache pages holding a collection of redo records. The buffer normally flushes to disk in four cases: a transaction commits, the buffer is half full (that is, 8 MB), once every second, and when MySQL shuts down. Redo records also carry transaction state, so there is no need to worry about rollback if a data page with uncommitted changes has already been flushed to disk. A redo record contains roughly: tablespace + data page + offset + modified field + modified value. OK, so what does the table file look like once the timed thread flushes the rows in the cache pages to disk? A table file is divided into multiple groups of extents; a group has 256 extents, an extent has 64 pages, each page is 16 KB, and inside a data page sit a pile of data rows, i.e. the things you see in a MySQL table, plus some index directory entries, the addresses of the previous and next data pages, the page's maximum and minimum values, and so on. A sketch of the redo record and its buffer follows.
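To pin down what a redo record roughly carries and when the buffer flushes, a toy sketch (the field layout and record size are illustrative, following the description above, not InnoDB's real on-disk format):

```java
import java.util.ArrayList;
import java.util.List;

public class ToyRedoLog {
    /** Roughly what one redo record carries, per the description above. */
    record RedoRecord(int tablespaceId, long pageNo, int offset,
                      String modifiedField, String newValue) { }

    static final int BUFFER_CAPACITY = 16 * 1024 * 1024; // 16 MB log buffer

    private final List<RedoRecord> buffer = new ArrayList<>();
    private long bytesUsed = 0;

    /** Called for every change; records are tiny, so they are batched. */
    void append(RedoRecord r) {
        buffer.add(r);
        bytesUsed += 64;                        // pretend a record is ~64 bytes
        if (bytesUsed > BUFFER_CAPACITY / 2) {  // half full (8 MB): flush now
            flushToDisk();
        }
    }

    /** Also triggered on commit, by a once-a-second timer, and on shutdown. */
    void flushToDisk() {
        // here the buffered records would be written out as 512-byte
        // redo log blocks; this toy version just drops them
        buffer.clear();
        bytesUsed = 0;
    }
}
```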