1. What are the three normal forms of a database?

  1. First Normal Form (1NF): every field is atomic and cannot be divided further. (All relational database systems satisfy the first normal form; each column in a table holds a single, indivisible attribute.)
  2. Second Normal Form (2NF): built on top of the first normal form, that is, a table must satisfy 1NF before it can satisfy 2NF. It further requires that every instance (row) in a table be uniquely identifiable. A column is usually added to the table to store a unique identifier for each instance; this unique attribute column is called the primary key.
  3. Third Normal Form (3NF): a table must first satisfy the second normal form (2NF). In short, 3NF requires that a table does not contain non-key information that is already held in another table. The third normal form therefore has the following characteristics: 1. each column has only one value; 2. every row can be distinguished; 3. no table contains non-key information that another table already contains. (A small example follows this list.)
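As a rough illustration of the three normal forms, here is a minimal SQL sketch; the customer and orders tables and all column names are made up for illustration. Storing the customer's name and address directly in the orders table would duplicate non-key information that the customer table already holds, which 3NF forbids.

```sql
-- customer attributes live only in the customer table
CREATE TABLE customer (
    customer_id   INT PRIMARY KEY,      -- unique identifier for each row (2NF)
    customer_name VARCHAR(50),          -- atomic, single-valued columns (1NF)
    address       VARCHAR(100)
);

-- orders reference the customer instead of copying its columns (3NF)
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    order_date  DATE,
    amount      DECIMAL(10, 2),
    FOREIGN KEY (customer_id) REFERENCES customer(customer_id)
);
```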

2. What experience do you have in database optimization?

  1. PreparedStatement generally performs better than Statement: every SQL statement sent to the server goes through syntax checking, semantic analysis, compilation and caching, and a PreparedStatement, being precompiled, lets the server reuse that work instead of repeating it on every execution.
  2. Having foreign key constraints can affect insert and delete performance, so remove foreign keys when designing the database if the program can ensure data integrity.
  3. UNION ALL is much faster than UNION, so use UNION ALL when you are sure the two result sets being merged contain no duplicate data and need no sorting. Both UNION and UNION ALL combine two result sets into one, but they differ in usage and efficiency: 1. handling of duplicates: UNION filters out duplicate records after combining the tables, while UNION ALL does not remove duplicates; 2. sorting: UNION sorts by field order, while UNION ALL simply merges the two results and returns them. (See the sketch after this list.)
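A minimal sketch of the UNION vs. UNION ALL difference, assuming two hypothetical tables t1 and t2 that both have a name column:

```sql
-- UNION removes duplicate rows from the combined result (extra dedup/sort work)
SELECT name FROM t1
UNION
SELECT name FROM t2;

-- UNION ALL simply appends the second result to the first and keeps duplicates,
-- which is why it is faster when duplicates are impossible or acceptable
SELECT name FROM t1
UNION ALL
SELECT name FROM t2;
```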

3. Please briefly describe the types of commonly used indexes.

  1. Plain index: the most basic index, created on an ordinary column of a table with no special restrictions
  2. Unique index: similar to a plain index, except that the values of the indexed column must be unique; NULL values are allowed
  3. Primary key index: a special unique index that does not allow NULL values. The primary key index is typically created when the table is built
  4. Composite index: to further improve MySQL's efficiency, consider creating composite indexes, which combine multiple columns of a table into a single index. (Examples follow this list.)
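A quick sketch of how each index type can be created; the users table and index names are hypothetical and only meant to show the syntax:

```sql
-- primary key index, usually declared when the table is created
CREATE TABLE users (
    id    INT NOT NULL PRIMARY KEY,
    name  VARCHAR(50),
    email VARCHAR(100),
    city  VARCHAR(50)
);

-- plain index on a single column
CREATE INDEX idx_user_name ON users(name);

-- unique index: values must be unique, NULLs are allowed
CREATE UNIQUE INDEX idx_user_email ON users(email);

-- composite index combining several columns
CREATE INDEX idx_user_city_name ON users(city, name);
```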

4. What is the working mechanism of indexes in mysql database?

  • A database index is a sorted data structure maintained by the database management system to speed up querying and updating of data in a table. Indexes are usually implemented with B-trees and their variant, B+ trees. (A small illustration follows.)
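As a small, hedged illustration of the mechanism, EXPLAIN shows whether MySQL can descend an index's B+ tree or has to scan the whole table; this reuses the hypothetical users table and idx_user_name index from the sketch under question 3.

```sql
-- the condition matches the indexed column, so the key column of the EXPLAIN
-- output should show idx_user_name being used to locate the rows
EXPLAIN SELECT * FROM users WHERE name = 'alice';

-- wrapping the column in a function prevents index use,
-- so EXPLAIN is expected to report a full table scan (type = ALL)
EXPLAIN SELECT * FROM users WHERE LENGTH(email) = 20;
```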

5. What are some common basic commands for operating a MySQL database?

  1. Check whether MySQL is running: run service mysql status on Debian and service mysqld status on RedHat
  2. Start or stop the MySQL service: run service mysqld start to start it, and service mysqld stop to stop it
  3. Log in to MySQL from the shell: run mysql -u root -p
  4. To list all databases, run show databases;
  5. Switch to a database and work in it: run use databasename; supplying the name of the database
  6. To list all tables in a database: show tables;
  7. Get the names and types of all columns in a table: describe table_name;

6. Replication principle and process of mysql.

MySQL's built-in replication capability is the foundation for building large, high-performance applications. MySQL's data can be distributed across multiple systems; this distribution mechanism works by copying data from one MySQL host to other hosts and replaying it there. During replication, one server acts as the master while one or more other servers act as slaves.

The master server writes updates to its binary log file and maintains an index of the files to track log rotation. These logs record the updates to be sent to the slave servers. When a slave connects to the master, it informs the master of the position of the last successful update it read from the log.

The slave server then receives any updates that have occurred since that point, after which it blocks and waits for the master to notify it of new updates. The process is as follows:

1. The master server logs updates to binary log files.

2. The slave server copies the master's binary log events to its own relay log.

3. The slave server replays the events in the relay log and applies the updates to its own database. (A minimal setup sketch follows.)
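A minimal sketch of pointing a slave at a master and checking the replication threads; the host, user, password and log coordinates are placeholders, and newer MySQL releases use the equivalent CHANGE REPLICATION SOURCE TO / START REPLICA syntax.

```sql
-- on the slave: tell it where to resume reading the master's binary log
CHANGE MASTER TO
    MASTER_HOST = '192.168.1.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;

-- start the I/O thread (copies binlog to relay log) and the SQL thread (replays it)
START SLAVE;

-- verify both threads are running and how far the slave lags behind
SHOW SLAVE STATUS\G
```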

7. What replication types does mysql support?

  1. Statement-based replication: the SQL statements executed on the master are executed again on the slave. MySQL uses statement-based replication by default because it is efficient, and automatically switches to row-based replication when it finds a statement cannot be replicated exactly.
  2. Row-based replication: the row changes themselves are copied to the slave rather than re-executing the statements there. Supported since MySQL 5.0.
  3. Mixed replication: statement-based replication is used by default, and row-based replication is used when a statement cannot be replicated exactly. (See the sketch below for switching the format.)
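A small sketch of inspecting and switching the replication format at runtime; changing it permanently would normally be done in the server configuration instead, and sufficient privileges are required.

```sql
-- see which binary log / replication format the server currently uses
SHOW VARIABLES LIKE 'binlog_format';

-- switch the global format to row-based (STATEMENT and MIXED are the other options)
SET GLOBAL binlog_format = 'ROW';
```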

8. What is the difference between MyISAM and InnoDB in MySQL?

  1. Transaction support: MyISAM emphasizes performance, each query is atomic, and it executes faster than InnoDB, but it does not support transactions. InnoDB provides transactions, foreign keys and other advanced database features; its transaction-safe (ACID compliant) tables support COMMIT, ROLLBACK and crash recovery.
  2. Locking: InnoDB supports row-level locking, while MyISAM only supports table-level locking (MyISAM does, however, allow concurrent inserts at the end of the table).
  3. InnoDB supports MVCC, while MyISAM does not
  4. InnoDB supports foreign keys, while MyISAM does not
  5. Table primary key: MyISAM allows tables without any index or primary key; its indexes store the addresses of the rows. InnoDB automatically generates a hidden 6-byte primary key if no primary key or non-null unique index is defined; the data is stored as part of the primary (clustered) index, and secondary indexes store the value of the primary key.
  6. InnoDB does not support full-text indexing, while MyISAM does.
  7. Portability, backup and recovery: MyISAM stores its data as files, which makes cross-platform data movement convenient, and single tables can be handled individually during backup and restore. For InnoDB, the free options are copying the data files, backing up the binlog, or using mysqldump, which becomes relatively painful once the data volume reaches tens of gigabytes.
  8. Storage structure: each MyISAM table is stored as three files on disk, whose names start with the table name and whose extensions indicate the file type: .frm stores the table definition, .MYD (MYData) is the data file, and .MYI (MYIndex) is the index file. InnoDB stores all tables in the same data file (which can be several files, or per-table tablespace files); an InnoDB table's size is limited only by the operating system's file size limit, generally 2GB. (A short sketch of choosing the engine follows.)
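A brief sketch of specifying and checking a table's storage engine; the post table is hypothetical.

```sql
-- choose the engine explicitly when creating the table
CREATE TABLE post (
    id    INT PRIMARY KEY,
    title VARCHAR(200)
) ENGINE = InnoDB;

-- check which engine an existing table uses
SHOW TABLE STATUS LIKE 'post'\G

-- convert the table to another engine
ALTER TABLE post ENGINE = MyISAM;
```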

9. What is the difference between varchar and char, and what do the numbers in varchar(50) and int(20) mean?

  1. Varchar differs from char: CHAR is a fixed-length type and varchar is a variable-length type.
  2. The meaning of 50 in varchar(50): it holds at most 50 characters (in MySQL 4.1 and later the length is measured in characters rather than bytes)
  3. The meaning of 20 in int(20): the M in int(M) indicates the maximum display width for integer types; the maximum legal display width is 255. It does not affect the storage size or value range of the column. (See the sketch below.)
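A small illustration of the three points above, with made-up table and column names; the display width of int(20) only affects how the value is padded when ZEROFILL is used, not what can be stored.

```sql
CREATE TABLE type_demo (
    code    CHAR(10),          -- fixed length: always occupies 10 characters
    remark  VARCHAR(50),       -- variable length: holds at most 50 characters
    counter INT(20) ZEROFILL   -- 20 is only the display width, not the value range
);

INSERT INTO type_demo VALUES ('A1', 'short text', 42);

-- counter is displayed padded to 20 digits, but its range is still that of INT
SELECT code, remark, counter FROM type_demo;
```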

10. What are the four transaction isolation levels supported by InnoDB in MySQL?

  1. Read Uncommitted: at this isolation level, all transactions can see the results of other uncommitted transactions. It is rarely used in real-world applications because its performance is not much better than the other levels. Reading uncommitted data is also known as a dirty read.
  2. Read Committed: this is the default isolation level of most database systems (but not MySQL). It satisfies the simple definition of isolation: a transaction can only see changes made by already committed transactions. This level still allows non-repeatable reads, because another transaction may commit new changes while this one is running, so the same SELECT may return different results.
  3. Repeatable Read: this is MySQL's default transaction isolation level. It ensures that multiple reads within the same transaction see the same rows even while other transactions modify data concurrently. In theory this level still allows another thorny problem, the phantom read: when a transaction reads rows in a certain range, another transaction inserts a new row into that range, and a subsequent read of the same range finds a new "phantom" row. The InnoDB and Falcon storage engines address this with the multi-version concurrency control (MVCC) mechanism. Note: multi-versioning only solves the non-repeatable read problem; it is gap locking (also part of the concurrency control here) that solves the phantom read problem.
  4. Serializable: the highest isolation level; it solves the phantom read problem by forcing transactions to be ordered so that they cannot conflict with each other. In short, it places a shared lock on every row read. At this level a lot of timeouts and lock contention can occur. (A sketch of setting the level follows.)
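A minimal sketch of checking and changing the isolation level; the variable is named transaction_isolation in MySQL 5.7.20+/8.0 and tx_isolation in older versions.

```sql
-- check the isolation level of the current session (use @@tx_isolation on old servers)
SELECT @@transaction_isolation;

-- change it for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- or change the server-wide default for new sessions
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```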

Rest your eyes and continue!

11. A table has a large field X (for example, of type text). Field X is rarely updated and is mainly read. Would you keep X in the table or split it into a separate child table, and why?

If a table contains large fields (text, blob) and these fields are not accessed very often, keeping them together becomes a disadvantage. MySQL stores records by row and data blocks have a fixed size (16K), so the smaller each record is, the more records fit into one block. In that case the large fields should be moved out, so that the majority of queries, which touch only the small fields, are handled more efficiently. When the large field is needed, a join query becomes unavoidable, but the trade-off is worthwhile. After the split, an UPDATE of those fields has to touch multiple tables. (A sketch of the split follows.)
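A rough sketch of the split, with hypothetical table and column names: the small, frequently queried fields stay in the main table and the large text column moves to a child table keyed by the same id.

```sql
-- main table: small fields that most queries touch
CREATE TABLE article (
    id     INT PRIMARY KEY,
    title  VARCHAR(200),
    author VARCHAR(50)
);

-- child table: the rarely-updated large field
CREATE TABLE article_content (
    article_id INT PRIMARY KEY,
    content    TEXT,
    FOREIGN KEY (article_id) REFERENCES article(id)
);

-- only queries that really need the large field pay for the join
SELECT a.title, c.content
FROM article a
JOIN article_content c ON c.article_id = a.id
WHERE a.id = 1;
```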

12. How is the row lock of MySQL's InnoDB engine implemented (i.e. what is the lock placed on)?

InnoDB implements row locks by locking index entries, unlike Oracle, which locks rows inside the data blocks themselves. Because of this implementation, InnoDB only uses row-level locking when the data is retrieved through an index condition; otherwise it falls back to table locking! (A small sketch follows.)
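A hedged sketch of the behaviour with a hypothetical account table: locking rows through an indexed condition should take row locks only, while a condition on an unindexed column makes InnoDB lock every row it scans, which behaves like a table lock.

```sql
CREATE TABLE account (
    id      INT PRIMARY KEY,
    name    VARCHAR(50),
    balance DECIMAL(10, 2),
    INDEX idx_account_name (name)
) ENGINE = InnoDB;

-- session 1: the condition uses the indexed column name,
-- so only the matching index entries (rows) are locked
START TRANSACTION;
SELECT * FROM account WHERE name = 'alice' FOR UPDATE;

-- session 2: balance has no index, so InnoDB cannot lock by index entry
-- and ends up locking all the rows it scans (effectively a table-wide lock)
START TRANSACTION;
SELECT * FROM account WHERE balance > 100 FOR UPDATE;
```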

13. What are the global parameters that control memory allocation in MySQL?

  1. key_buffer_size:
     * key_buffer_size specifies the size of the index buffer, which determines how fast indexes are processed, especially index reads. By checking the status values Key_read_requests and Key_reads you can tell whether key_buffer_size is set properly: the ratio Key_reads / Key_read_requests should be as low as possible, ideally 1:100 or even 1:1000 (both status values can be obtained with SHOW STATUS LIKE 'Key_read%').
     * key_buffer_size only applies to MyISAM tables. It is used even if you create no MyISAM tables, because internal temporary disk tables are MyISAM; the status value Created_tmp_disk_tables shows the details. For a machine with 1GB of memory that does not use MyISAM tables, a value of 16M (8-64M) is recommended.
     * Precautions for key_buffer_size:
       1. A single key buffer cannot exceed 4GB. If it does, you are likely to run into the following three bugs:

          Bugs.mysql.com/bug.php?id=…

          Bugs.mysql.com/bug.php?id=…

          Bugs.mysql.com/bug.php?id=…

       2. It is suggested to set the key buffer to 1/4 of physical memory (for MyISAM-only servers even 30% to 40%). If key_buffer_size is too large, the system will page frequently and performance drops; since MySQL relies on the operating system's cache for the data files, enough memory must be reserved for the system, and in many cases the data is much larger than the indexes.
       3. If the machine is powerful enough, you can configure multiple key buffers, each caching a dedicated set of indexes.
  2. innodb_buffer_pool_size: the buffer pool size in bytes, the memory area where InnoDB caches table and index data. The MySQL default is 128MB. The maximum value depends on the CPU architecture: 4294967295 (2^32-1) on a 32-bit OS and 18446744073709551615 (2^64-1) on a 64-bit OS; on 32-bit systems the practical maximum is lower, limited by the CPU and operating system in use. If the buffer pool size is greater than 1GB, set innodb_buffer_pool_instances to a value greater than 1. Reads and writes in memory are very fast, so innodb_buffer_pool_size reduces reads and writes to disk; in-memory data is flushed to disk once the transaction commits or a checkpoint condition is met. However, memory is also used by other processes of the operating system and the database, so the buffer pool is generally set to 3/4 to 4/5 of total memory; if it is not set properly, memory is either wasted or over-used. For busy servers the buffer pool is divided into multiple instances to improve concurrency and reduce contention between threads for the read/write cache; the size of each instance is mainly affected by innodb_buffer_pool_instances, although the effect is small.
  3. query_cache_size:
     * When MySQL receives a SELECT query, it hashes the query text and looks the hash value up in the query cache. The hash values are kept in a hash list and the query result sets are stored in the cache; each node in the hash list holds the corresponding cached result set plus some information about the tables involved in the query. If an identical query matches by hash value, the cached result set is returned to the client directly. If any data in a table changes, MySQL notifies the query cache that every cached query involving that table is invalid and frees the memory it occupied.
     * Advantages and disadvantages of the query cache:
       1. Hashing and hash lookup of every SELECT consume resources. Although both operations are efficient and the cost for a single query is negligible, under high concurrency with thousands of queries the overhead of hashing and lookups becomes significant.
       2. Query cache invalidation: if a table changes frequently, the invalidation rate of the query cache is very high. A change means not only changes to the data in the table but also any change to its structure or indexes.
       3. Queries with different SQL text but the same result set are cached separately, consuming excessive memory; the cache treats SQL as different based on character case, spaces or comments (because their hash values differ).
       4. Careless parameter settings can cause a large amount of memory fragmentation (the related parameters are described later).
  4. read_buffer_size: the size of MySQL's read buffer. A thread that performs a sequential scan of a table is allocated a read buffer by MySQL, and the read_buffer_size variable controls how large that buffer is. If sequential scan requests are very frequent and you find the scans too slow, you can improve performance by increasing this variable and thus the buffer size. (A short sketch of checking and setting these parameters follows.)
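A minimal sketch of inspecting and adjusting these memory parameters at runtime; the sizes are placeholders, permanent changes would normally go into the server configuration file, and innodb_buffer_pool_size only became resizable online in MySQL 5.7.

```sql
-- inspect the current values
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'innodb_buffer_pool%';
SHOW VARIABLES LIKE 'read_buffer_size';

-- check the key buffer hit ratio: Key_reads / Key_read_requests should stay low
SHOW GLOBAL STATUS LIKE 'Key_read%';

-- adjust values at runtime (placeholder sizes)
SET GLOBAL key_buffer_size = 64 * 1024 * 1024;            -- 64MB
SET GLOBAL innodb_buffer_pool_size = 1024 * 1024 * 1024;  -- 1GB
```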

14. If a table has only one field of type VARCHAR(N) with utf8 encoding, what is the maximum value of N (exact order of magnitude)?

Utf8 uses up to 3 bytes per character, and the row length defined in MySQL cannot exceed 65535 bytes. The maximum value of N is therefore roughly (65535 - 1 - 2) / 3 ≈ 21844: 1 is subtracted because the actual row storage starts from the second byte, 2 is subtracted because the varchar column stores its length in a 2-byte prefix, and the division by 3 comes from the utf8 limit of up to 3 bytes per character.
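A quick sketch to check the order of magnitude, assuming the table contains only this one nullable column; the exact boundary can shift slightly with the row format and server version.

```sql
-- 21845 utf8 characters would need 65535 bytes plus the length prefix,
-- so this is expected to fail with a "Row size too large" error
CREATE TABLE t_too_big (s VARCHAR(21845)) CHARACTER SET utf8;

-- just under the limit, this should succeed
CREATE TABLE t_max (s VARCHAR(21844)) CHARACTER SET utf8;
```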

15. What are the advantages and disadvantages of [SELECT *] versus [SELECT with all fields listed explicitly]?

  1. The former has to parse the data dictionary to resolve the column names; the latter does not
  2. With the former, the output column order follows the table definition; with the latter, it follows the order of the specified fields.
  3. When the table structure changes (for example via ALTER TABLE adding or renaming columns), the former still runs unchanged, while the latter may need to be modified
  4. The latter can be indexed for optimization, while the former cannot
  5. The latter is more readable than the former

16. What are the similarities and differences between the HAVING clause and WHERE?

  1. Syntactically: WHERE filters on the table's column names, while HAVING filters on aliases from the SELECT result
  2. WHERE limits the number of rows read from the table; HAVING limits the number of rows returned to the client
  3. Indexes: WHERE can use indexes, while HAVING cannot use indexes and can only operate on the temporary result set
  4. An aggregate function cannot be used after WHERE, whereas HAVING is used specifically with aggregate functions. (See the sketch after this list.)
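A minimal sketch of the difference, using a hypothetical orders table with customer_id, order_date and amount columns: WHERE filters individual rows before grouping (and can use indexes), while HAVING filters the aggregated groups afterwards and can reference the aggregate's alias.

```sql
SELECT customer_id, SUM(amount) AS total
FROM orders
WHERE order_date >= '2023-01-01'   -- row filter: applied while reading the table
GROUP BY customer_id
HAVING total > 1000;               -- group filter: applied to the aggregated result
```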

17. In MySQL, insert a record when it does not exist and update it when it does

INSERT INTO table (a, b, c) VALUES (1, 2, 3) ON DUPLICATE KEY UPDATE c = c + 1;

18. MySQL insert ... select statements and update statements with joins

 

```sql
insert into student (stuid, stuname, deptid)
select 10, 'xzm', 3 from student where stuid > 8;

update student a inner join student b on b.stuID = 10
set a.stuname = concat(b.stuname, b.stuID)
where a.stuID = 10;
```

Your attention and shares are my motivation to keep moving forward. Thank you for reading and for your support!