Personal blog post: Xiaodi Classroom MySQL interview questions summary
1. What is ACID in MySQL transactions?
Key point: the four properties of a transaction
- Atomicity: a transaction is the smallest, indivisible unit of work; its operations either all succeed or all fail, which is usually controlled with commit and rollback.
- Consistency: a transaction must move the database from one consistent state to another. In plain terms, the result of the transaction must match expectations (reference: "What is transaction consistency?").
- Isolation: one transaction is isolated from another; changes made by a transaction are invisible to other transactions until it commits, and while one transaction is operating on a piece of data, other transactions are not allowed to operate on that same data.
- Durability: once a transaction commits, its changes are stored permanently in the database; even if the system crashes, the modified data is not lost.
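A minimal sketch of atomicity in practice, using a hypothetical accounts table: both updates take effect together at commit, or a rollback undoes them both.

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- If both statements succeeded, make the changes permanent:
COMMIT;
-- If anything had gone wrong, undo everything done in this transaction instead:
-- ROLLBACK;
```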
2. How much do you know about dirty reads, phantom reads, and non-repeatable reads in MySQL?
Key point: transaction isolation levels and the problems associated with them
- Dirty read: a transaction modifies data but has not yet committed, yet other transactions can already read that data; in other words, one transaction reads another transaction's uncommitted data.
- Phantom read: the current transaction reads the records in a range, another transaction inserts a new record into that range, and when the current transaction reads the same range again the results differ. This is a phantom read.
- Non-repeatable read: the same transaction reads the same data several times but gets different results, because another transaction has modified that data in the meantime. This is a non-repeatable read.
The difference between phantom reads and non-repeatable reads:
- A phantom read concerns a range of rows, while a non-repeatable read concerns the content of a single row; in both cases the overall symptom is that two reads return inconsistent results.
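A minimal two-session sketch of a dirty read under READ UNCOMMITTED (the course table and prices are hypothetical, mirroring the example in question 3):

```sql
-- Session B: change the price but do not commit yet
BEGIN;
UPDATE course SET price = 50 WHERE id = 1;

-- Session A: under READ UNCOMMITTED it already sees B's uncommitted change
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;
SELECT price FROM course WHERE id = 1;  -- returns 50: a dirty read

-- Session B: roll back, so the 50 that Session A read never really existed
ROLLBACK;
```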
3. What are the transaction isolation levels, from low to high? What is MySQL's default?
Key point: MySQL transaction isolation levels
- Isolation levels, from low to high: Read Uncommitted => Read Committed => Repeatable Read => Serializable
- Read Uncommitted: reads uncommitted data; changes made in a transaction can be read by others even before they are committed, which leads to dirty reads.
  - Dirty read example: an operations colleague configures a promotion for a paid course, changing the price from 500 yuan to 50 yuan, but does not commit the transaction. You happen to see the course at that low price and get ready to buy it, but she immediately rolls back, reconfigures it, and commits; by the time you place the order, the price is back to 500 yuan.
- Read Committed: a transaction can only see changes that other transactions have already committed, so running the same query twice within one transaction can return different results. This is the non-repeatable read problem.
  - Non-repeatable read example: Lao Wang has 1000 points in Xiaodi Classroom and is about to redeem them for the Interview Special Course; the database shows 1000 points. But Lao Wang's girlfriend logs in elsewhere, redeems the 1000 points for the SpringCloud Microservices Special Course, and commits before Lao Wang does; when the system tries to redeem the Interview Special Course for Lao Wang, it finds the points are gone and the redemption fails. In other words, transaction A reads the data first, transaction B then updates the data and commits, and when transaction A reads the data again it has changed.
- Repeatable Read: MySQL's default transaction isolation level; it solves dirty reads and non-repeatable reads, but phantom read problems can still occur.
  - On the phantom read problem: MySQL's InnoDB engine automatically helps us here with MVCC, i.e. multi-version concurrency control.
  - Phantom read example: Lao Wang has 1000 points in Xiaodi Classroom and gets ready to redeem the Interview Special Course; a query confirms he has 1000 points. Meanwhile his girlfriend, logged in elsewhere, redeems the Interview Special Course first. When Lao Wang's transaction commits, it finds the course he is buying already exists, so the earlier read was useless, like an illusion.
- Serializable: solves dirty reads, non-repeatable reads, and phantom reads and can guarantee transaction safety, but it forces all transactions to execute serially (while one transaction runs, the others must queue), so concurrency is poor.
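A quick sketch of checking and changing the isolation level; the system variable is transaction_isolation in MySQL 8.0 and tx_isolation in older versions.

```sql
-- Check the current isolation level (MySQL 8.0; older versions use @@tx_isolation)
SELECT @@transaction_isolation;   -- REPEATABLE-READ by default

-- Change it for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```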
4. How does MySQL solve the non-repeatable read and phantom read problems?
- Non-repeatable read: arises when the same row can be modified concurrently; the fix is to add a row lock.
- Phantom read: concerns a batch of rows, mainly with insert and delete operations; fixing phantom reads requires locking the whole table.
Reference article 1: blog.csdn.net/nhlbengbeng…
Reference article 2: blog.csdn.net/sanyuesan00…
For MVCC, this is explained in High Performance MySQL:
4.1 How MySQL solves the non-repeatable read problem
In MySQL, the default transaction isolation level is Repeatable Read. To solve the non-repeatable read problem, InnoDB uses MVCC (multi-version concurrency control), which is based on optimistic locking.
MVCC adds two hidden columns to the end of each row: a create version number and a delete version number. Each transaction is assigned an incremented current transaction version number when it starts.
- MVCC insert operation:
```sql
begin; -- assume the current transaction version = 1
insert into user (id,name,age) values (1, 'Zhang San', 10); -- insert; create_version is set to 1
insert into user (id,name,age) values (2, 'Li Si', 12);     -- insert; create_version is set to 1
commit; -- commit the transaction
```
id | name | age | create_version | delete_version |
---|---|---|---|---|
1 | Zhang San | 10 | 1 | NULL |
2 | Li Si | 12 | 1 | NULL |
On insert, the current transaction's version number is written into the new row's create_version column.
- MVCC delete operation: a delete simply sets the row's delete_version to the current transaction's version number.
```sql
begin; -- assume the current transaction version = 3
delete from user where id = 2;
commit; -- commit the transaction
```
id | name | age | create_version | delete_version |
---|---|---|---|---|
1 | Zhang San | 10 | 1 | NULL |
2 | Li Si | 12 | 1 | 3 |
- MVCC update operation: implemented as delete + insert; the current row is first marked as deleted, and then a new row is inserted.
```sql
begin; -- assume the current transaction version = 10
update user set age = 11 where id = 1; -- update; the new row's create_version is 10
commit; -- commit the transaction
```
id | name | age | create_version | delete_version |
---|---|---|---|---|
1 | Zhang San | 10 | 1 | 10 |
2 | Li Si | 12 | 1 | 3 |
1 | Zhang San | 11 | 10 | NULL |
- MVCC query operation:
```sql
begin; -- assume the current transaction version = 12
select * from user where id = 1;
commit; -- commit the transaction
```
To avoid reading stale rows or rows that other transactions have changed, a query only returns rows that satisfy both of the following conditions:
1. The current transaction's version must be greater than or equal to the row's create_version.
2. The current transaction's version must be smaller than the row's delete_version, or delete_version must be NULL.
That is: (create_version <= current_version < delete_version) || (create_version <= current_version && delete_version == NULL). This way rows modified by other transactions are never returned, and repeatable reads are achieved within the same transaction.
The result should be:
id | name | age | create_version | delete_version |
---|---|---|---|---|
1 | Zhang San | 11 | 10 | NULL |
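Expressed as a query over the simplified hidden-column model above, treating the hidden columns as if they were ordinary columns (a sketch only; real InnoDB stores transaction IDs and roll pointers rather than literal version columns):

```sql
-- @current_version is the version number assigned to the reading transaction
SET @current_version = 12;

SELECT id, name, age
FROM user
WHERE id = 1
  AND create_version <= @current_version
  AND (delete_version IS NULL OR delete_version > @current_version);
```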
4.2 How MySQL solves the phantom read problem
InnoDB's Repeatable Read (RR) level avoids phantom reads through the MVCC mechanism, as explained below.
Snapshot read and current read
MVCC makes reads repeatable, but the data we read may be historical data rather than the latest data in the database. Reading this historical data is called a snapshot read, while reading the latest version in the database is called a current read.
Snapshot read
When a plain select is executed, InnoDB performs a snapshot read by default: it records the result as of that point and returns the snapshot data afterwards, so even if other transactions commit, the results of the current transaction's selects are not affected.
The snapshot is generated at the first select. That is, if A starts a transaction and does nothing, B then inserts a row and commits, and only afterwards does A run its first select, the data returned to A will include the row added by B. After that it does not matter whether other transactions commit, because the snapshot has already been generated and subsequent selects are based on it.
Current read
Operations that modify data (update, insert, delete) use the current read mode: they read the record's latest version, and after the write the record's version number is changed to the current transaction's, so even data committed by other transactions can be seen. This is necessary: if you want to update a record that another transaction has already deleted and committed, the update would conflict, so the write must be based on the latest data. And it is precisely this that allows phantom reads to occur.
- For snapshot reads, MySQL uses MVCC to avoid phantom reads.
- For current reads, MySQL uses X (exclusive) locks or next-key locks to prevent other transactions from changing the data:
- Use the Serializable isolation level (fully serialized reads).
- (update, delete) when the where condition is on the primary key, phantom reads are handled by adding record locks (index locks/row locks) on the primary key index.
- (update, delete) when the where condition is not on the primary key, a next-key lock is used; a next-key lock is a combination of a record lock (index lock/row lock) and a gap lock (a gap lock locks not only the rows being used but also the gaps around them).
A next-key lock can also be taken explicitly with a select inside a transaction, so that another transaction blocks when it tries to change data in that range; an X (exclusive) lock cannot be added to a record that already holds a shared lock.
```sql
select * from table where id < 6 lock in share mode; -- shared lock
select * from table where id < 6 for update;         -- exclusive lock
```
For more on next-key locks, see www.cnblogs.com/zhoujinyi/p…
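A minimal two-session sketch of a next-key lock blocking a phantom insert under Repeatable Read (the table, values, and the secondary index on age are all hypothetical):

```sql
-- Session A: a ranged current read takes next-key locks on the matching
-- index records and on the gaps between/around them
BEGIN;
SELECT * FROM user WHERE age BETWEEN 10 AND 20 FOR UPDATE;

-- Session B: this insert falls into a locked gap, so it blocks until Session A
-- finishes, which is exactly what keeps a phantom row out of A's range
INSERT INTO user (id, name, age) VALUES (3, 'Wang Wu', 15);

-- Session A: release the locks
COMMIT;
```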
5. What are MySQL's common storage engines?
- Before MySQL 5.5 the default engine was MyISAM; from 5.5 onward the default is InnoDB.
- Differences between the two:
Comparison | InnoDB | MyISAM |
---|---|---|
Transactions | Supported | Not supported |
Lock granularity | Row locks, suitable for high concurrency | Table locks, not suitable for high concurrency |
Default engine | Default since MySQL 5.5 | Default before MySQL 5.5 |
Foreign keys | Supported | Not supported |
Typical scenarios | Balanced read/write or write-heavy workloads that need transactions | Read-heavy, write-light workloads that do not need transactions |
Full-text indexing | Not supported before MySQL 5.6 (use a plugin or ElasticSearch) | Supported |
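A quick sketch of checking the available engines and choosing one per table (the article table is hypothetical):

```sql
-- List the engines this server supports and which one is the default
SHOW ENGINES;

-- The engine is chosen per table at creation time
CREATE TABLE article (
  id    BIGINT PRIMARY KEY,
  title VARCHAR(128)
) ENGINE = InnoDB;
```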
6. How much do you know about MySQL locks?
- The smaller the lock granularity, the higher the concurrency support!
Refer to my blog on MySQL lock related issues
7. How much do you know about MySQL indexes?
Refer to my blog MySQL index analysis and related interview questions
8. What are the three normal forms of database design?
- First normal form: ensure each column is atomic.
- Second normal form: ensure every column in the table is related to the primary key.
- Third normal form: ensure every column is directly related to the primary key column, not indirectly.
  - For example, when designing an order table, the customer number can be stored as a foreign key that relates the order table to the customer table; other customer information such as name or company should not be added as columns of the order table.
Reference: blog.csdn.net/huangyaa729…
9. What is the sequence of MySQL query commands?
- The written order of a query's clauses is: SELECT => FROM => WHERE => GROUP BY => HAVING => ORDER BY
```sql
select    -- which result fields to return
from      -- which table to query from
where     -- first-pass row filtering
group by  -- group the filtered rows [key point]
having    -- second-pass filtering on the grouped data [key point]
order by  -- what order to sort by

SELECT COUNT(*) AS num FROM chapter GROUP BY video_id HAVING num > 10 ORDER BY video_id DESC;
```
10. What is the difference between CHAR and VARCHAR in MySQL?
Comparison | char(16) | varchar(16) |
---|---|---|
Length | Fixed length, stores characters | Variable length, stores characters |
Value shorter than the defined length | Padded with spaces up to the defined length | Stored at the actual inserted length |
Performance | Faster to access than varchar | Slower to access than char |
Usage scenarios | Suited to very short, fixed-length strings such as mobile phone numbers or MD5 values | Suited to strings whose length varies, such as shipping addresses or email addresses |
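A small sketch of the typical column choices (the table and columns are hypothetical):

```sql
CREATE TABLE user_profile (
  phone   CHAR(11),      -- fixed-length value such as a mobile phone number
  pwd_md5 CHAR(32),      -- an MD5 value is always 32 hex characters
  address VARCHAR(255),  -- variable-length value such as a shipping address
  email   VARCHAR(128)
);
```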
11. What is the difference between DATETIME and TIMESTAMP in MySQL?
Type | Bytes used | Range | Time zone behavior |
---|---|---|---|
datetime | 8 bytes | 1000-01-01 00:00:00 to 9999-12-31 23:59:59 | Stored independently of the time zone and does not change |
timestamp | 4 bytes | 1970-01-01 00:00:01 to 2038-01-19 11:14:07 | Time-zone dependent; changes with the database/session time zone |
- Why does timestamp only go up to 2038? A MySQL timestamp is 4 bytes, so its maximum value is 2^31 - 1 = 2147483647 seconds counted from 1970-01-01 00:00:01 UTC, which in Beijing time is 2038-01-19 11:14:07.
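A small sketch showing the time-zone behavior difference, assuming the insert is done in a session time zone other than UTC (the demo_time table is hypothetical):

```sql
CREATE TABLE demo_time (dt DATETIME, ts TIMESTAMP);
INSERT INTO demo_time VALUES ('2024-01-01 12:00:00', '2024-01-01 12:00:00');

SET time_zone = '+00:00';
SELECT * FROM demo_time;  -- dt is returned unchanged; ts is converted to the new session time zone
```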
12. What is the difference between # and $ in MyBatis?
- # can prevent SQL injection: every parameter passed through it is handled as a string value. Under the hood, # is equivalent to precompiling the SQL statement with PreparedStatement, which is what prevents SQL injection.
- $ concatenates the incoming parameter directly into the SQL before execution. It is used for things like table names and field names, and the parameters passed through $ should be provided by the server side, never by user input.
- JDBC SQL injection case: blog.csdn.net/ashleyjun/a…
- SQL injection example:
```sql
SELECT * FROM users WHERE `username` = '' OR '1' = '1'
```
As you can see, even though the correct user name was never entered, the query still returns a lot of user information; that is SQL injection.
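A sketch of the SQL each placeholder style ends up producing, assuming a mapper statement like `select * from users where username = #{name}` versus `... = ${name}` and the malicious input `' OR '1'='1` (the mapper and parameter names are illustrative):

```sql
-- With #{name}: MyBatis precompiles the statement and binds the value as a parameter,
-- so the whole input is treated as one literal string and simply matches no user:
SELECT * FROM users WHERE username = ?   -- ? bound to the string ' OR '1'='1

-- With ${name}: the input is spliced into the SQL text as-is,
-- turning the condition into one that is always true:
SELECT * FROM users WHERE username = '' OR '1'='1'
```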
13. How do you optimize SQL pagination over a large MySQL table?
Problem: an online product table holds tens of millions of rows, and deep pagination over it is very slow. What are the optimization ideas?
- Symptom: tens of millions of rows is normal for things like data streams and logs, and ordinary deep pagination over such a table is very slow.
- Cause of the slowness: with `select * from product limit N,M`, MySQL has to scan past the first N rows before it can return the M target rows, so the deeper the page, the slower the query.
Solution:
- 1. Use caching: Redis on the back end, localStorage on the front end.
- 2. Move the query to a search engine such as ElasticSearch.
- 3. Use a covering index: select only columns that are part of an index so the query can be answered from the index alone. For example, if title and cateory form a composite index on the table, the query below is more efficient:
```sql
select title, cateory from product limit 1000000,100;
```
- 4. If the id is auto-increment and no rows have been deleted in between, use a subquery to locate the id at the offset position:
```sql
-- Slow baseline: the first 1,000,000 rows must be scanned before the 100 target rows are returned
select * from oper_log where type='BUY' limit 1000000,100;   -- about 5 seconds

-- First locate the id at offset 1,000,000 by selecting only the id column, which is much cheaper
select id from oper_log where type='BUY' limit 1000000,1;    -- about 0.4 seconds

-- Use it as a subquery: since id is an auto-increment primary key, id >= (that value)
-- skips the scan of the first 1,000,000 rows and fetches the 100 target rows directly
select * from oper_log where type='BUY' and id >= (select id from oper_log where type='BUY' limit 1000000,1) limit 100;   -- about 0.8 seconds
```
14. What are the common MySQL log types and what are they for?
- 1. Redo log: guarantees the durability of transactions; if a failure happens before dirty pages are flushed to disk, replaying the redo log on restart brings the database back to transaction consistency.
- 2. Undo log (rollback log): guarantees the atomicity of data by recording the version of the data before the transaction, used for rollback; InnoDB's Repeatable Read and Read Committed isolation levels are implemented with MVCC + undo.
- 3. Error log: records information about MySQL starting, stopping, and errors during operation.
- 4. Slow query log: records SQL statements whose execution time exceeds a configurable threshold; only those slow statements are logged.
- 5. Binlog (binary log): used for master/slave replication to keep replicas synchronized.
- 6. Relay log: used for master/slave synchronization; the slave saves the binlog received from the master locally and then replays it.
- 7. General log: records database operations in detail; it is disabled by default because enabling it reduces database performance.
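A quick sketch of checking and enabling the slow query log mentioned above (the one-second threshold is illustrative):

```sql
SHOW VARIABLES LIKE 'slow_query_log%';   -- is it enabled, and where does it write
SET GLOBAL slow_query_log = 'ON';        -- enable it
SET GLOBAL long_query_time = 1;          -- log statements that run longer than 1 second
```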
15. How are MySQL's transaction properties implemented?
- Isolation is implemented with locks.
- Durability is implemented through the redo log.
- Atomicity is implemented through the undo log (rollback log); in the InnoDB storage engine, the undo log is also used to implement multi-version concurrency control.
  - When a record is deleted, the undo log records a corresponding insert.
  - When a record is inserted, the undo log records a corresponding delete.
  - When a record is updated, the undo log records a reverse update that restores the previous values.
- MySQL ensures consistency through atomicity, isolation, and durability: C (consistency) is the goal, while A (atomicity), I (isolation), and D (durability) are the means the database provides to reach it; the database must implement AID in order to achieve consistency.
For details about MySQL locks, see Learn about MySQL locks