1. What are the three normal forms of a database?
- First Normal Form (1NF): fields are atomic and cannot be divided further. (All relational database systems satisfy the first normal form; every field in a table holds a single attribute and is indivisible.)
- Second Normal Form (2NF): builds on the first normal form, that is, a table must satisfy 1NF before it can satisfy 2NF. It requires that every instance or row in a table be uniquely identifiable. This usually means adding a column that stores a unique identifier for each instance; this unique attribute column is called the primary key.
- Third Normal Form (3NF): requires that the second normal form be satisfied first. In short, 3NF requires that a table not contain non-primary-key information that is already stored in another table. So the third normal form has the following characteristics (a rough schema sketch follows this list): >> 1. Each column has only one value. >> 2. Every row is distinguishable. >> 3. No table contains non-primary-key information that another table already contains.
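As a rough illustration of these rules, here is a minimal sketch with hypothetical table and column names: each column is atomic, every row has a primary key, and customer details are referenced by key rather than duplicated in the orders table.

```sql
-- Hypothetical schema sketch: atomic columns (1NF), a primary key per row (2NF),
-- and customer attributes stored only in the customers table (3NF).
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100),
    address     VARCHAR(200)
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,              -- a reference, not a copy of customer data
    order_date  DATE,
    FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);
```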
2. What experience do you have in database optimization?
- PreparedStatement is generally better than Statement: every SQL statement sent to the server involves syntax checking, semantic analysis, compilation, and caching; a precompiled PreparedStatement lets repeated executions reuse that work.
- Foreign key constraints affect insert and delete performance, so if the application can guarantee data integrity, remove the foreign keys when designing the database.
- UNION ALL is much faster than UNION, so use UNION ALL if you can be sure that the two result sets being merged contain no duplicate data and do not need sorting (see the sketch below). > The UNION and UNION ALL keywords both combine two result sets into one, but they differ in usage and efficiency: >> 1. Handling of duplicates: UNION filters out duplicate records after combining the tables, while UNION ALL does not remove duplicates. >> 2. Sorting: UNION sorts by field order; UNION ALL simply merges the two results and returns them.
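A quick sketch of the difference, using two hypothetical tables t1 and t2:

```sql
-- UNION removes duplicate rows from the combined result (and may sort to do so).
SELECT id FROM t1
UNION
SELECT id FROM t2;

-- UNION ALL simply concatenates the two results, so it is faster.
SELECT id FROM t1
UNION ALL
SELECT id FROM t2;
```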
3. Please briefly describe the types of commonly used indexes.
- Plain index: the basic index created on a table column, with no restrictions on its values
- Unique index: similar to a plain index, except that the values of the indexed column must be unique; null values are allowed
- Primary key index: a special unique index that does not allow null values. The primary key index is typically created when the table is built
- Composite index: to further improve MySQL's efficiency, consider creating composite indexes that combine multiple columns of a table into one index (examples below).
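For reference, the index types above can be created roughly as follows; the table t_user and its columns are hypothetical names used only for illustration.

```sql
-- Plain index
CREATE INDEX idx_name ON t_user(name);
-- Unique index: values must be unique, NULLs are allowed
CREATE UNIQUE INDEX idx_email ON t_user(email);
-- Primary key index: usually declared when the table is created
ALTER TABLE t_user ADD PRIMARY KEY (id);
-- Composite index over multiple columns
CREATE INDEX idx_name_age ON t_user(name, age);
```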
4. What is the working mechanism of indexes in mysql database?
- A database index is a sorted data structure in a database management system to help query and update data in a database table quickly. Indexes are usually implemented using B trees and their variant B+ trees
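Whether a given query actually uses an index can be checked with EXPLAIN; a minimal sketch against a hypothetical t_user table:

```sql
-- The key column of the EXPLAIN output shows which index (if any) is chosen,
-- and rows estimates how many rows MySQL expects to examine.
EXPLAIN SELECT * FROM t_user WHERE name = 'Tom';
```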
5. What are some common basic MySQL commands?
- Check whether MySQL is running: run service mysql status on Debian and service mysqld status on RedHat
- Start or stop the MySQL service: run service mysqld start to start it and service mysqld stop to stop it
- Log in to MySQL from the shell: run mysql -u root -p
- List all databases: run show databases;
- Switch to a database and work on it: run use database_name; to enter the database named database_name
- List all tables in a database: run show tables;
- Get the names and types of all columns in a table: run describe table_name;
6. Replication principle and process of mysql.
MySQL's built-in replication capability is the foundation for building large, high-performance applications. It distributes MySQL's data across multiple systems by copying data from one MySQL host to other hosts (slaves) and replaying it there. * During replication, one server acts as the master and one or more other servers act as slaves.
The master writes updates to its binary log file and maintains an index of the file to track log rotation. These logs record the updates to be sent to the slaves. When a slave connects to the master, it tells the master the position of the last successful update it read from the log.
The slave then receives any updates that have occurred since that point, blocks, and waits for the master to notify it of new updates. The process is as follows (a sketch of the typical setup commands comes after the steps):
1. The master server logs updates to binary log files.
2. The slave copies the master's binary log events to its own relay log.
3. The slave replays the events in the relay log, applying the updates to its own database.
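As a rough sketch of how this is typically wired up, not a complete setup guide; the host, user, password, and log coordinates below are placeholders:

```sql
-- On the master: check the current binary log file and position.
SHOW MASTER STATUS;

-- On the slave: point it at the master and start the replication threads.
CHANGE MASTER TO
    MASTER_HOST = '192.168.1.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 154;
START SLAVE;

-- Verify that Slave_IO_Running and Slave_SQL_Running are both Yes.
SHOW SLAVE STATUS\G
```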
7. What replication types does mysql support?
- Statement-based replication: SQL statements executed on the master are executed again on the slave. MySQL uses statement-based replication by default, which is efficient; it automatically switches to row-based replication when it finds that a statement cannot be replicated exactly.
- Row-based replication: the changed rows are copied to the slave instead of re-executing the statements there. Supported since MySQL 5.0.
- Mixed replication: statement-based replication is used by default, and row-based replication is used when statement-based replication cannot be exact (see the binlog_format sketch below).
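The replication format is controlled by the binlog_format variable; a minimal sketch (the value you choose depends on your version and needs):

```sql
-- Statement-, row-, or mixed-based replication is selected via binlog_format.
SHOW VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'MIXED';   -- or 'STATEMENT' / 'ROW'
```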
8. What is the difference between MyISAM and InnoDB in MySQL?
- Transaction support > MyISAM: the emphasis is on performance; each query is atomic and executes faster than InnoDB, but transactions are not supported. > InnoDB: provides advanced database features such as transactions and foreign keys; its transaction-safe (ACID-compliant) tables offer commit, rollback, and crash-recovery capabilities.
- Locking: InnoDB supports row-level locking, while MyISAM only supports table-level locking, although MyISAM can still insert new rows at the end of the table concurrently.
- InnoDB supports MVCC, while MyISAM does not
- InnoDB supports foreign keys, while MyISAM does not
- Table primary key > MyISAM: allows tables without any index or primary key; its indexes store the addresses of rows. > InnoDB: if there is no primary key or non-empty unique index, it automatically generates a 6-byte primary key (not visible to the user). The data is part of the primary (clustered) index, and secondary indexes store the primary key value.
- InnoDB does not support full-text indexes (prior to MySQL 5.6), while MyISAM does.
- Portability, backup, and recovery > MyISAM: data is stored as files, so cross-platform data transfer is very convenient, and a single table can be operated on during backup and restore. > InnoDB: the free options are copying the data files, backing up the binlog, or using mysqldump, which becomes painful once the data volume reaches tens of gigabytes.
- Storage structure > MyISAM: each MyISAM table is stored as three files on disk. The file names start with the table name and the extensions indicate the file type: .frm stores the table definition, .MYD (MYData) stores the data, and .MYI (MYIndex) stores the indexes. > InnoDB: all tables are stored in the same data file by default (this can also be multiple files, or separate per-table tablespace files); an InnoDB table's size is limited only by the operating system's file-size limit, generally 2GB. (A sketch of choosing an engine follows below.)
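A minimal sketch of selecting and switching engines; the table names are hypothetical:

```sql
-- Choose the engine when creating the table.
CREATE TABLE orders_innodb (id INT PRIMARY KEY, amount DECIMAL(10,2)) ENGINE=InnoDB;
CREATE TABLE logs_myisam  (id INT, msg VARCHAR(255)) ENGINE=MyISAM;

-- Convert an existing table to another engine.
ALTER TABLE logs_myisam ENGINE=InnoDB;

-- Check which engine a table currently uses.
SHOW TABLE STATUS LIKE 'orders_innodb';
```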
9. What is the difference between varchar and char? What does the 50 in varchar(50) mean, and what does the 20 in int(20) mean?
- Varchar differs from char: CHAR is a fixed-length type and varchar is a variable-length type.
- The meaning of 50 in varchar(50): it holds up to 50 characters
- The 20 in int(20): the M in int(M) indicates the maximum display width for integer types; the maximum legal display width is 255. The display width does not limit the range of values that can be stored (a small sketch follows below).
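A small sketch of these definitions, using a hypothetical table t:

```sql
-- name holds up to 50 characters; code always occupies 10 characters;
-- the 20 in INT(20) only affects the display width (visible with ZEROFILL),
-- not the range of values the column can store.
CREATE TABLE t (
    name VARCHAR(50),
    code CHAR(10),
    num  INT(20) ZEROFILL
);
```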
10. What are the four transaction isolation levels supported by InnoDB in MySQL?
- Read Uncommitted >> At this isolation level, all transactions can see the results of other uncommitted transactions. This isolation level is rarely used in real-world applications because its performance is not much better than the other levels. Reading uncommitted data is known as a dirty read.
- Read Committed >> This is the default isolation level for most database systems (but not MySQL). It satisfies a simple definition of isolation: a transaction can only see changes made by already-committed transactions. This level also permits what is called a non-repeatable read: other transactions may commit new changes while this transaction is running, so the same SELECT may return different results.
- Repeatable Read >> This is MySQL's default transaction isolation level. It ensures that multiple reads within the same transaction see the same rows even while other transactions run concurrently. In theory, though, this can lead to another thorny problem: the phantom read. Simply put, a phantom read occurs when a transaction reads rows in a certain range, another transaction inserts a new row in that range, and the first transaction then re-reads that range and finds a new phantom row. The InnoDB and Falcon storage engines address this through the Multiversion Concurrency Control (MVCC) mechanism. Note: multi-versioning only solves the non-repeatable read problem; gap locking (the "concurrency control" referred to here) is what solves the phantom read problem.
- Serializable >> This is the highest isolation level. It solves the phantom read problem by forcing transactions to be ordered so that they cannot conflict with each other; in short, it places a shared lock on every row it reads. At this level, many timeouts and much lock contention can result. (A sketch of setting the level follows below.)
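The current level can be inspected and changed per session or globally. A minimal sketch; note the variable name differs by version (tx_isolation before MySQL 8.0, transaction_isolation from 8.0 on):

```sql
-- Check the current isolation level.
SHOW VARIABLES LIKE '%isolation%';

-- Change it for this session, or for the whole server.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET GLOBAL  TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```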
Let your eyes rest and continue!
11. A table has a large field X (for example, of type text) that is not frequently updated and is mainly read. Would you split it into a separate child table, or keep it in the same table? Why?
If a table contains large fields (text, blob) and these fields are not accessed very often, keeping them together becomes a disadvantage. MySQL records are stored in rows and data blocks have a fixed size (16K); the smaller each record is, the more records fit in one block. Splitting the large field out means that most queries, which touch only the small fields, become more efficient. When the large field does need to be queried, a join is unavoidable, but it is worth it. The downside is that after splitting, an UPDATE touching those fields has to update multiple tables. (A rough sketch of such a split follows below.)
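A rough sketch of such a vertical split, using a hypothetical article table with a large body field:

```sql
-- Keep the frequently-read small columns in the main table...
CREATE TABLE article (
    id      INT PRIMARY KEY,
    title   VARCHAR(200),
    author  VARCHAR(50)
);

-- ...and move the rarely-updated large field into a child table keyed by the same id.
CREATE TABLE article_content (
    article_id INT PRIMARY KEY,
    body       TEXT,
    FOREIGN KEY (article_id) REFERENCES article(id)
);

-- Only join when the large field is actually needed.
SELECT a.title, c.body
FROM article a JOIN article_content c ON c.article_id = a.id
WHERE a.id = 1;
```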
12. What does the InnoDB engine in MySQL place its row locks on (i.e., how are row locks implemented)?
InnoDB locks rows by locking index entries, unlike Oracle, which locks rows within data blocks. Because of this implementation, InnoDB uses row-level locking only when data is retrieved through an index; otherwise, InnoDB falls back to table-level locking.
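A hedged illustration, assuming a hypothetical t_user table where id is an indexed column and name is not:

```sql
-- Session 1: id is indexed, so only the matching row's index entry is locked.
BEGIN;
SELECT * FROM t_user WHERE id = 10 FOR UPDATE;

-- Session 2: name has no index, so InnoDB cannot lock by index entry
-- and ends up locking every row it scans, behaving like a table lock.
BEGIN;
SELECT * FROM t_user WHERE name = 'Tom' FOR UPDATE;
```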
13. What are the global parameters that control memory allocation in MySQL?
- key_buffer_size > * key_buffer_size specifies the size of the index buffer, which determines the speed of index handling, especially index reads. By checking the status values Key_read_requests and Key_reads, you can tell whether key_buffer_size is set properly. The ratio of Key_reads to Key_read_requests should be as low as possible, at least 1:100 and preferably 1:1000 (these status values can be obtained with show status like 'key_read%'). > * key_buffer_size only applies to MyISAM tables. The value is used even if you do not create MyISAM tables yourself, because internal temporary disk tables are MyISAM tables; you can check the status value created_tmp_disk_tables for details. For a machine with 1 GB of memory that does not use MyISAM tables, the recommended value is 16M (8-64M). > * key_buffer_size precautions: >>> 1. A single key buffer cannot exceed 4G. If it does, you may run into the following three bugs: >>>>> bugs.mysql.com/bug.php?id=… >>>>> bugs.mysql.com/bug.php?id=… >>>>> bugs.mysql.com/bug.php?id=… >>> 2. It is recommended to set the key buffer to 1/4 of physical memory (for the MyISAM engine), or even 30% to 40% of physical memory; if key_buffer_size is set too large, the system will page frequently and performance will drop. Since MySQL uses the operating system's cache to cache data, enough memory must be reserved for the system, and in many cases the data is much larger than the index. >>> 3. If the machine is powerful enough, you can configure multiple key buffers and let each one cache a dedicated set of indexes.
- innodb_buffer_pool_size > * The buffer pool size in bytes, the memory area where InnoDB caches table and index data. The MySQL default is 128 MB. The maximum value depends on your CPU architecture: on a 32-bit system it is 4294967295 (2^32-1) and on a 64-bit system it is 18446744073709551615 (2^64-1). > * On 32-bit systems, the practical maximum imposed by the CPU and operating system may be lower than that stated maximum. If the buffer pool size is greater than 1 GB, set innodb_buffer_pool_instances to a value greater than 1. > * Reads and writes in memory are very fast, so innodb_buffer_pool_size reduces reads and writes to disk; in-memory data is flushed to disk once the data is committed or checkpoint conditions are met. However, memory is also needed by other processes of the operating system and the database, so the buffer pool size is generally set to 3/4 to 4/5 of total memory; if it is set improperly, memory is either wasted or over-committed. For busy servers, the buffer pool is divided into multiple instances to improve concurrency and reduce contention between threads for the read/write cache; the total buffer pool size is divided among the instances configured by innodb_buffer_pool_instances.
- query_cache_size > When MySQL receives a SELECT query, it hashes the query text and looks the hash up in the query cache. If no match is found, the hash value is stored in a hash list and the query's result set is stored in the cache; each node in the hash list holds the corresponding cached result set plus some information about the tables involved in the query. If the hash matches an existing entry, the cached result set is returned to the client directly. If any data in a table changes, MySQL notifies the query cache that all cached queries involving that table are invalid and frees the memory they occupied. > Advantages and disadvantages of the query cache: >> 1. Resource consumption from hashing and looking up query statements. MySQL hashes every SELECT query to check whether a cached result exists. Although hashing and lookup are efficient and the cost for a single query is negligible, under high concurrency with thousands of queries the overhead of hashing and lookups becomes significant. >> 2. Query cache invalidation: if tables change frequently, the cache invalidation rate is very high. A table change is not only a change to its data but also any change to its structure or indexes. >> 3. Queries with different SQL text but the same result set are cached separately, which can consume a lot of memory, because the cache treats SQL as different whenever the character case, whitespace, or comments differ (their hash values will differ). >> 4. Improper parameter settings can cause a large amount of memory fragmentation; the related parameters are described later.
- read_buffer_size > The size of the MySQL read buffer. A thread that scans a table sequentially allocates a read buffer for it, and the read_buffer_size variable controls the size of this buffer. If sequential scan requests against your tables are very frequent and you find them too slow, you can improve performance by increasing this value, i.e., the size of that memory buffer. (A sketch of inspecting and adjusting these parameters follows below.)
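The parameters above can be inspected and adjusted at runtime; a minimal sketch, where the values shown are purely illustrative rather than recommendations:

```sql
-- Index buffer for MyISAM indexes, plus the status values used to judge it.
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW STATUS LIKE 'Key_read%';                    -- Key_reads / Key_read_requests ratio
SET GLOBAL key_buffer_size = 16777216;           -- 16M

-- InnoDB buffer pool (online resize requires MySQL 5.7 or later).
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 2147483648; -- 2G

SHOW VARIABLES LIKE 'read_buffer_size';
```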
14. If a table has only one field of type VARCHAR(N) with utf8 encoding, what is the maximum value of N (to the exact order of magnitude)?
utf8 takes up to 3 bytes per character, and the row length defined by MySQL cannot exceed 65535 bytes. Therefore the maximum value of N is roughly (65535 - 1 - 2) / 3: 1 is subtracted because the actual row storage starts from the second byte, 2 is subtracted because the varchar column stores its length in a 2-byte prefix, and the result is divided by 3 because each utf8 character can take up to 3 bytes.
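A quick check of the arithmetic: (65535 - 1 - 2) / 3 = 21844. The sketch below uses a hypothetical table name; the exact boundary also depends on nullability, the character set, and any other columns in the table.

```sql
-- Roughly the largest single utf8 VARCHAR column a one-column table can hold.
CREATE TABLE t_varchar_limit (
    v VARCHAR(21844) CHARACTER SET utf8
) ENGINE=InnoDB;
```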
15. What are the advantages and disadvantages of [SELECT *] and [SELECT all fields]?
- The former must consult the data dictionary to expand *, the latter does not
- The column order of the former's result follows the table definition, while the latter's follows the order of the specified fields
- When the table structure changes (columns are added or renamed via ALTER TABLE), the former needs no change to the SQL, while the latter may need to be modified
- The latter can be optimized with an index (covering index), while the former cannot
- The latter is more readable than the former
16. What are the similarities and differences between the HAVING clause and WHERE?
- Syntax: WHERE uses the table's column names, while HAVING can use aliases from the SELECT result
- WHERE filters the rows read from the table; HAVING filters the rows returned to the client
- Index: WHERE can use indexes, while HAVING cannot; it can only operate on the temporary result set
- Aggregate functions cannot be used after WHERE; HAVING is where aggregate functions are used (see the sketch below).
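A small sketch of the two clauses working together, using a hypothetical employee table:

```sql
-- WHERE filters rows before grouping (and can use an index on hire_date);
-- HAVING filters the grouped result and can use aggregate functions and aliases.
SELECT dept_id, AVG(salary) AS avg_salary
FROM employee
WHERE hire_date >= '2020-01-01'
GROUP BY dept_id
HAVING avg_salary > 5000;
```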
17. In MySQL, insert when a record does not exist and update when it does
INSERT INTO table (a, b, c) VALUES (1, 2, 3) ON DUPLICATE KEY UPDATE c = c + 1;
18. Using INSERT and UPDATE together with SELECT statements in MySQL
insert into student (stuid, stuname, deptid) select 10, 'xzm', 3 from student where stuid > 8;
update student a inner join student b on b.stuID = 10 set a.stuname = concat(b.stuname, b.stuID) where a.stuID = 10;
Your attention is my motivation to keep moving forward. Thank you for reading and for your support.