Mysql configuration file

  • Binary log (log-bin): used for primary/secondary replication
  • Error log (log-error): disabled by default. Records serious warnings, error information, and details about each startup and shutdown
  • Query log (general log): disabled by default. Records every SQL statement received; enabling it degrades MySQL performance because logging consumes system resources
  • Data files
    • .frm file: stores the table structure
    • .MYD file: stores table data (MyISAM)
    • .MYI file: stores table indexes (MyISAM)

Mysql Logical Architecture

  • MySQL differs from other databases in its storage engine architecture (layer 3 in the figure). The pluggable storage engine architecture separates query processing and other system tasks from data storage and retrieval, so developers can select an appropriate storage engine based on business requirements.
  • Layer 1: Connection layer. Handles connection processing, authorization and authentication, and related security schemes (provides a thread pool and SSL-based secure connections)
  • Layer 2: Service layer. Completes most core service functions, such as the SQL interface, query caching, and SQL analysis and optimization
  • Layer 3: Engine layer. Actually responsible for storing and retrieving data in MySQL; a storage engine is selected according to the application's requirements
  • Layer 4: Storage layer. Persists data on the file system (hardware dependent)

The storage engine

Index optimization analysis

  • Performance degradation: SQL is slow, takes a long time to execute, or waits a long time
    • Poorly written SQL
    • Index failure (single-column, composite)
    • Too many joins in associated queries
    • Server tuning and configuration of individual parameters (buffers, thread count, etc.)
  • Common JOIN queries
    • SQL execution order
      • The written order
      • MySQL's actual execution order
    • The seven kinds of JOIN queries
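The seven JOIN patterns can be sketched as follows. The tables tbl_emp and tbl_dept and the join column deptId are illustrative, not from the original notes:

```sql
-- 1. Inner join: rows present in both tables
SELECT * FROM tbl_emp e INNER JOIN tbl_dept d ON e.deptId = d.id;
-- 2. Left join: all rows of e, matched rows of d
SELECT * FROM tbl_emp e LEFT JOIN tbl_dept d ON e.deptId = d.id;
-- 3. Right join: all rows of d, matched rows of e
SELECT * FROM tbl_emp e RIGHT JOIN tbl_dept d ON e.deptId = d.id;
-- 4. Left excluding join: rows of e with no match in d
SELECT * FROM tbl_emp e LEFT JOIN tbl_dept d ON e.deptId = d.id WHERE d.id IS NULL;
-- 5. Right excluding join: rows of d with no match in e
SELECT * FROM tbl_emp e RIGHT JOIN tbl_dept d ON e.deptId = d.id WHERE e.deptId IS NULL;
-- 6. Full join (MySQL lacks FULL OUTER JOIN; emulate with UNION)
SELECT * FROM tbl_emp e LEFT JOIN tbl_dept d ON e.deptId = d.id
UNION
SELECT * FROM tbl_emp e RIGHT JOIN tbl_dept d ON e.deptId = d.id;
-- 7. Full excluding join: rows in either table with no match in the other
SELECT * FROM tbl_emp e LEFT JOIN tbl_dept d ON e.deptId = d.id WHERE d.id IS NULL
UNION
SELECT * FROM tbl_emp e RIGHT JOIN tbl_dept d ON e.deptId = d.id WHERE e.deptId IS NULL;
```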

The index

  • The index profile

    • An index is a data structure that helps MySQL retrieve data efficiently. That is, an index is an ordered, fast-lookup data structure (it affects both the lookup and the ordering performed by SQL statements)
    • It is a data structure that satisfies a particular lookup algorithm
    • Indexes themselves are also large and are often stored on disk as index files
    • "Index" generally refers to a B+ tree index. Clustered indexes, secondary indexes, composite indexes, prefix indexes, and unique indexes all use B+ trees by default. Besides B+ tree indexes, there are also hash indexes.
  • Advantages

    • Improves the efficiency of data retrieval and reduces the database's IO cost
    • Data can be sorted by the index column, which reduces the cost of sorting and CPU consumption
  • Disadvantages

    • An index is effectively a table itself: it holds the primary key and indexed fields and points to the records of the base table, so index columns also occupy space
    • Indexes slow down updates: every INSERT, UPDATE, or DELETE on the table must also update the index files
    • If a table has a large amount of data, it can take a lot of extra time and effort to build an optimal set of indexes
  • Classification of indexes

    • Single-value index: that is, an index contains only a single column. A table can have multiple single-column indexes

    • Unique index: The value of the indexed column must be unique, but null values are allowed

    • Composite index: An index contains multiple columns

    • The basic grammar

      • create
        • create [unique] index indexName on mytable(column(length))
        • alter table mytable add [unique] index indexName (column(length))
      • delete
        • drop index indexName on mytable
      • To view
        • show index from mytable
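A minimal sketch of the syntax above; the table and column names (user, name, email, age) are illustrative:

```sql
-- Create a single-column index (the length prefix is optional for string columns)
CREATE INDEX idx_user_name ON user(name);
-- Equivalent ALTER TABLE form; UNIQUE makes it a unique index
ALTER TABLE user ADD UNIQUE INDEX idx_user_email (email);
-- Composite index: column order matters for the leftmost-prefix rule
ALTER TABLE user ADD INDEX idx_user_name_age (name, age);
-- Inspect and drop
SHOW INDEX FROM user;
DROP INDEX idx_user_name ON user;
```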
    • Index structure and index principle

    • Which cases need to be indexed

      • Primary keys automatically create indexes
      • Create indexes for frequently queried fields
      • Fields used to join with other tables in queries: index the foreign-key relationship
      • Frequently updated fields are not suitable for creating indexes
      • Indexes are not created for fields that are not used in the WHERE condition
      • Single key/composite index selection? (High concurrency tends to create composite indexes)
      • A sorted field in a query that can be accessed through an index to speed sorting
      • Statistics or grouping fields in the query
    • Which situations are not suitable for indexing

      • Tables with few records
      • Tables that are frequently inserted into, deleted from, or updated
      • Columns with many duplicate values: indexing them has little practical effect
  • Performance analysis

    • mysql query optimizer
      • Mysql has an optimizer module that is specifically responsible for optimizing the SELECT statement. Its main function is to provide the client with the best execution plan (the optimal data retrieval method) for the Query requested by the client by calculating and analyzing the statistics collected in the system.
    • Mysql common Bottlenecks
      • CPU: CPU saturation usually occurs when data is being loaded into memory or read from disk
      • IO: disk I/O bottlenecks occur when far more data must be loaded than fits in memory
      • Server hardware performance bottlenecks: use top, free, iostat, and vmstat to check system performance
    • explain
      • Use the EXPLAIN keyword to simulate how the optimizer executes a SQL query, so you can see how MySQL handles your SQL statement and analyze performance bottlenecks in your queries or table structures
      • How to use?
        • explain + sql
        • Information contained
            • id
              • The serial number of a SELECT query that contains a set of numbers indicating the order in which the SELECT clause or operation table is executed in the query
              • Three cases
                • ids are the same: execution proceeds from top to bottom
                • ids differ: for subqueries, the id increases; a larger id means higher priority and earlier execution
                • ids both same and different: rows with larger ids execute first; among equal ids, execution proceeds from top to bottom
            • select_type
              • SIMPLE: A SIMPLE select query that does not contain subqueries or unions
              • PRIMARY: if the query contains any complex subpart, the outermost query is marked PRIMARY
              • SUBQUERY: a subquery in the SELECT or WHERE list
              • DERIVED: a subquery in the FROM list is labeled DERIVED; MySQL executes these subqueries recursively and places the result in a temporary table
              • UNION: a second SELECT appearing after a UNION is marked UNION; if the UNION is contained in a subquery of the FROM clause, the outer SELECT is marked DERIVED
              • UNION RESULT: the SELECT that retrieves the result from the UNION's temporary table
            • type
              • Shows which access type the query uses
                • From best to worst: system > const > eq_ref > ref > range > index > ALL
                • system: the table has only one row (a system table)
                • const: used when comparing a primary key or unique index against a constant, matching at most one row; MySQL can treat the result as a constant
                • eq_ref: unique index scan; for each index key, only one record in the table matches. Common with primary keys and unique indexes
                • ref: non-unique index scan; returns all rows matching a single value
                • range: retrieves only rows in a given range, using an index to select rows. The key column shows which index is used
                • index: full index scan (traverses only the index tree; index files are usually smaller than data files, so this is faster than ALL)
                • ALL: a full table scan
            • possible_keys
              • Displays the index of the table that might be used. If a field involved in a query has an index, the index is listed but may not be used by the query
            • key
              • The index actually used. If NULL, no index was used
              • If a covering index is used in the query (the queried fields are covered by the composite index), that index appears in the key column
            • key_len
              • The number of bytes used in the index; can be used to determine which part of the index the query uses. Shorter is better, as long as precision is not lost
              • The value displayed is the maximum possible length in the index field, not the actual length used. Key_len is calculated from the table definition, not retrieved from the table.
            • ref: shows which columns or constants are used to look up values on the index column; ideally a constant
            • rows
              • The optimizer's estimate of how many rows must be examined per table to find the requested records
            • Extra
              • Contains important information that is not suitable for display in other columns
                • Using filesort: MySQL sorts the data with an external sort instead of reading it in the table's index order. A sort that cannot be done via an index is called a filesort
                • Using temporary: a temporary table is used to hold intermediate results. MySQL uses temporary tables when sorting query results, commonly with ORDER BY and GROUP BY
                • Using index: the SELECT uses a covering index, so access to the table's data rows is avoided; efficiency is good
                  • If Using where also appears, the index is used to perform lookups of index key values
                  • If Using where does not appear, the index is used only to read data, not to perform lookups
                  • Covering index: the queried columns are covered by the created composite index (no need to go back to the data file via the index)
                • Using where: a WHERE clause is used to filter rows
                • Using join buffer: the join buffer is used
                • Impossible WHERE: the WHERE condition is always false, so no rows can be selected
                • Select tables optimized away: for MyISAM without a GROUP BY clause, MIN/MAX or COUNT(*) is resolved from the index during query-plan generation, without waiting for the execution phase
                • Distinct: MySQL stops searching for more rows with the same value after finding the first match
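A small sketch of reading EXPLAIN output; the table t_user and index idx_name_age are illustrative:

```sql
CREATE TABLE t_user (
  id   INT PRIMARY KEY AUTO_INCREMENT,
  name VARCHAR(20),
  age  INT,
  KEY idx_name_age (name, age)
);
EXPLAIN SELECT name, age FROM t_user WHERE name = 'zhangsan';
-- Points to check in the output: type=ref (non-unique index scan),
-- key=idx_name_age, and Extra containing Using index (covering index,
-- so the data rows are never touched)
```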
          • Some considerations for using indexes
            • Index failure cases (the optimizer may reorder WHERE conditions, so condition order alone is not the issue; what matters is whether the index can be used)
              • Full value matching is best
              • Best left-prefix rule: with a composite index, the query must start from the leftmost column of the index and must not skip columns in the index
              • Do not perform any operation (calculation, function, automatic or manual type conversion) on an index column; doing so invalidates the index and causes a full table scan
              • A range query on a column prevents the index columns after it from being used (equality queries are preferred)
              • Use covering indexes whenever possible (index columns contain the queried columns; avoid SELECT *)
              • != and <> invalidate the index (columns after the inequality cannot be used for lookups, though the index can still help with sorting)
              • IS NULL / IS NOT NULL invalidate the index
              • LIKE starting with a wildcard ('%abc') invalidates the index and causes a full table scan
                • If a leading wildcard is required, use a covering index
              • Unquoted string values invalidate the index (implicit type conversion)
              • Use OR sparingly, as it can invalidate the index
              • With GROUP BY and ORDER BY, pay attention to the order of the columns (and to whether the sort condition is a constant)
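A sketch of several of the rules above, assuming a hypothetical composite index on columns (a, b, c) of a table t:

```sql
-- Assume: CREATE INDEX idx_abc ON t(a, b, c);
-- Uses the full index (complete leftmost prefix)
SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3;
-- Uses only column a of the index (b is skipped, so c cannot be used)
SELECT * FROM t WHERE a = 1 AND c = 3;
-- Range on b: a and b are used for lookup, but c after the range is not
SELECT * FROM t WHERE a = 1 AND b > 2 AND c = 3;
-- Function applied to an index column: index invalidated, full table scan
SELECT * FROM t WHERE LEFT(a, 1) = '1';
-- Leading wildcard: full table scan (unless a covering index applies)
SELECT * FROM t WHERE a LIKE '%abc';
```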
            • Join optimization
              • LEFT JOIN: add the index on the right table (the driven table)
              • RIGHT JOIN: add the index on the left table (the driven table)
              • Reduce the number of NestedLoop iterations in JOIN statements as much as possible: always use the small result set to drive the large one, prioritize optimizing the inner loop of the NestedLoop, and make sure the join-condition fields on the driven table are indexed. If they cannot be indexed and memory is sufficient, consider increasing join_buffer_size
            • General optimization
              • Single-column index: choose the index with the best filtering for the current query
              • Composite index: put the most selective fields as early as possible in the index order, and include as many useful queried fields in the index as possible
              • Analyze statistics and adjust how the query is written so the most appropriate index is chosen

Query interception analysis

Query optimization

  • Always let the small table drive the large table
    • When the subquery table is smaller than the outer table, prefer EXISTS: select * from A where exists (select 1 from B where B.id = A.id); otherwise use IN. EXISTS returns true/false per outer row, while IN materializes the subquery's result set.
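The rule above as a sketch, using the same illustrative tables A and B:

```sql
-- B is small, A is large: B's result set drives A, so use IN
SELECT * FROM A WHERE A.id IN (SELECT id FROM B);
-- A is small, B is large: A drives B, so use EXISTS
SELECT * FROM A WHERE EXISTS (SELECT 1 FROM B WHERE B.id = A.id);
```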
  • The order by optimization
    • Use index instead of filesort

      • Index sort is efficient: the index itself is ordered, so rows come out sorted during the index scan. Filesort is inefficient
      • ORDER BY uses an index when
        • The ORDER BY column list satisfies the index's leftmost-prefix match
        • The combination of the WHERE clause columns and the ORDER BY columns satisfies the index's leftmost-prefix match
      • Filesort has two algorithms
        • Two-pass (multi-way) sort: used before MySQL 4.1; scans the disk twice. First read the row pointers and the ORDER BY columns and sort them; then scan the sorted list and re-read the row data from disk according to the pointers
        • Single-pass sort: after MySQL 4.1, read all required columns from disk at once, sort them in the buffer by the ORDER BY columns, then scan the sorted result and output. Faster overall: it avoids a second read and turns random IO into sequential IO, but it uses more memory because whole rows are kept in the buffer.
          • There is a pitfall, however: if sort_buffer_size is smaller than the data to be sorted, multiple IO passes and multiple merges are required, which can make single-pass slower than two-pass
            • Avoid SELECT *
            • Increase sort_buffer_size and max_length_for_sort_data
        • In general, single-pass sort is better than two-pass sort
    • Cases where ORDER BY uses an index
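A sketch of when ORDER BY can and cannot use an index; the table t_emp and index idx_age_birth are illustrative:

```sql
-- Assume: CREATE INDEX idx_age_birth ON t_emp(age, birth);
-- Uses the index for sorting (WHERE + ORDER BY form a leftmost prefix)
EXPLAIN SELECT * FROM t_emp WHERE age = 30 ORDER BY birth;
-- Using filesort: ORDER BY skips the leading index column age
EXPLAIN SELECT * FROM t_emp ORDER BY birth;
-- Using filesort: mixed ASC/DESC directions defeat the index
EXPLAIN SELECT * FROM t_emp WHERE age = 30 ORDER BY age ASC, birth DESC;
```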

  • Group by optimization
    • GROUP BY essentially sorts first and then groups
    • WHERE is applied before HAVING: any condition that can be written in WHERE should not be put in HAVING
    • Otherwise the rules are the same as for ORDER BY

Slow Query logs

  • Slow query logging is disabled by default (it affects MySQL performance)
  • show variables like '%slow_query_log%'
  • Enable it: set global slow_query_log=1
  • show variables like '%long_query_time%' (queries running longer than this threshold, in seconds, are logged)
  • set global long_query_time=3; the new value is visible in a new session, or via show global variables like '%long_query_time%'
  • The log file defaults to /var/lib/mysql (Linux)
  • show global status like '%slow_queries%' shows how many slow queries have been recorded
  • Log analysis tool: mysqldumpslow
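The steps above as a sketch; the slow-log file path is illustrative:

```sql
-- Enable slow query logging and set the threshold to 3 seconds
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 3;
-- Verify (open a new session, or query the global value)
SHOW GLOBAL VARIABLES LIKE '%long_query_time%';
-- Simulate a slow query for testing
SELECT SLEEP(4);
-- Count recorded slow queries
SHOW GLOBAL STATUS LIKE '%slow_queries%';
-- Then, from the shell (path is illustrative):
--   mysqldumpslow -s t -t 10 /var/lib/mysql/hostname-slow.log
--   (-s t: sort by query time; -t 10: show the top 10 entries)
```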

Batch insert data scripts

  1. Create a function that generates a random string
  2. Create a function that generates a random number
  3. Create a routine that inserts data into the emp table
  4. Create a routine that inserts data into the dept table
  5. Call the dept routine
  6. Call the emp routine
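A minimal sketch of steps 1 and 3 above; the emp table, its columns, and the row counts are illustrative, and creating functions may require SET GLOBAL log_bin_trust_function_creators = 1 when binary logging is on:

```sql
DELIMITER $$
-- Step 1: random string of length n
CREATE FUNCTION rand_string(n INT) RETURNS VARCHAR(255)
BEGIN
  DECLARE chars CHAR(52) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
  DECLARE result VARCHAR(255) DEFAULT '';
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    SET result = CONCAT(result, SUBSTRING(chars, FLOOR(1 + RAND() * 52), 1));
    SET i = i + 1;
  END WHILE;
  RETURN result;
END $$

-- Step 3: batch-insert rows into emp
CREATE PROCEDURE insert_emp(IN start_id INT, IN max_num INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  SET autocommit = 0;   -- commit once at the end for speed
  REPEAT
    SET i = i + 1;
    INSERT INTO emp(empno, ename, job, deptno)
    VALUES (start_id + i, rand_string(6), 'SALESMAN', FLOOR(1 + RAND() * 10));
  UNTIL i = max_num END REPEAT;
  COMMIT;
END $$
DELIMITER ;

CALL insert_emp(100000, 500000);  -- insert 500,000 rows
```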

show profile

MySQL provides a way to analyze resource consumption of statements executed in the current session, which can be used to measure SQL for tuning
  • Check whether it is supported/enabled: show variables like 'profiling'
  • Turn the feature on (off by default): set profiling=on;
  • After executing SQL statements, list them with their query ids: show profiles
  • Inspect one statement: show profile cpu, block io for query <query_id>;
  • Everyday problems to watch for in the output: converting HEAP to MyISAM, Creating tmp table, Copying to tmp table on disk, locked
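The steps above as a sketch; the profiled query and its id are illustrative:

```sql
SET profiling = ON;
SELECT * FROM emp GROUP BY id % 10 LIMIT 150000;  -- illustrative query
SHOW PROFILES;                            -- note the Query_ID of the statement
SHOW PROFILE CPU, BLOCK IO FOR QUERY 3;   -- replace 3 with the actual Query_ID
```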

Global logging (use only in test environments; keep disabled in production)

Mysql locking mechanism

Overview

  • Classification
    • By locking mechanism
      • Optimistic lock: modify first; at save time, check whether the data has been updated by someone else. Application level
      • Pessimistic lock: acquire the lock before the modifying operation. Database level
    • By data operation
      • Read lock (shared lock, S lock): other transactions can still read the locked rows but cannot write them (select * from users where id = 1 lock in share mode)
      • Write lock (exclusive lock, X lock): blocks other write locks and read locks until the current write operation completes (select * from users where id = 1 for update)
      • S and X locks can be table locks or row locks
    • Granularity of operation
      • Table lock: low overhead, fast locking, large granularity, high probability of lock conflict, low concurrency; suited to read-mostly workloads with few writes
      • Row lock: Innodb storage engine, default option
      • Page lock: BDB storage engine
    • The lock mode
      • Record lock: a lock on a single row record that is attached to an index
      • Gap lock: Locks the range between records, but not including records
      • Next-key lock: record lock + gap lock; locks the range between records including the record itself (a left-open, right-closed interval)
      • Intent locks
        • To let multiple lock granularities (table locks and row locks) coexist, InnoDB introduces intention locks, which are table-level locks
        • IS: intention shared lock
        • IX: intention exclusive lock
        • Before a transaction can acquire an S or X lock on a row, it must first acquire the IS or IX lock on the corresponding table
        • Purpose: handles conflicts between row locks and table locks by indicating that a transaction holds, or is about to hold, a lock on some row. For example, if a row in a table holds an X lock, the table cannot be X-locked
        • Without intention locks, locking a table would require checking every row for an existing row lock, which would be very inefficient
      • Insert intent lock (Insert Intention Lock)
        • The insert intention lock is a kind of gap lock produced by INSERT operations
        • It has two properties:
          • It conflicts with next-key locks: a next-key lock blocks the insert intention lock, preventing inserts into the gap and thus preventing phantom reads
          • Insert intention locks are compatible with each other, allowing concurrent inserts of different rows into the same gap
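A sketch of the two locking reads from the classification above; the users table is illustrative:

```sql
-- Shared (S) lock: other sessions may still read the row, but not modify it
SELECT * FROM users WHERE id = 1 LOCK IN SHARE MODE;  -- FOR SHARE in MySQL 8.0
-- Exclusive (X) lock: blocks other S and X requests on the row
BEGIN;
SELECT * FROM users WHERE id = 1 FOR UPDATE;
UPDATE users SET name = 'alice' WHERE id = 1;
COMMIT;  -- locks are released at commit or rollback
```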

Two kinds of reads

  • Normal read (snapshot read): implemented by MVCC, which not only gives InnoDB repeatable reads but also prevents phantom reads. The data read is consistent, but it may be historical data.

    • Plain SELECT statements (not including select ... lock in share mode or select ... for update)
  • Current read: guarantees the data read is both consistent and the latest version. It locks the read records, blocking other transactions from changing the same records, which avoids concurrency problems. InnoDB implements this with next-key locks, i.e. gap locks combined with record locks.

    • select ... lock in share mode, select ... for update, insert, update, delete

Three kinds of locks

  • To see which tables are locked: show open tables
  • Show status like ‘table%’
  • Table lock (biased toward reads)
    • Characteristics: favored by the MyISAM storage engine; low overhead, fast locking, no deadlocks, large lock granularity, high probability of lock conflict, low concurrency
    • Read lock: if session 1 read-locks table1, session 1 can read table1 but cannot write it and cannot access other tables; session 2 can read table1, blocks when writing table1, and can freely read and write other tables
    • Write lock: if session 1 write-locks table1, session 1 can read and write table1 but cannot access other tables; session 2 blocks on both reads and writes of table1 but can access other tables
    • When MyISAM executes a query it automatically read-locks the tables involved; when it executes an insert, delete, or update it automatically write-locks the tables involved
    • A read lock blocks writes but not reads; a write lock blocks both reads and writes
    • MyISAM's read-write lock scheduling is write-first, which is why MyISAM is not suitable for write-heavy tables: once a write lock is granted no other thread can do anything, so a large number of updates can make it hard for queries to ever acquire the lock, causing persistent blocking
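The read-lock behavior above as a sketch; table1 and table2 are illustrative MyISAM tables:

```sql
-- Session 1: read-lock a MyISAM table
LOCK TABLE table1 READ;
SELECT * FROM table1;      -- ok
-- UPDATE table1 ...       -- error: table1 was locked with a READ lock
-- SELECT * FROM table2;   -- error: table2 was not locked in this session
UNLOCK TABLES;
-- Check lock contention (a growing Table_locks_waited indicates contention)
SHOW STATUS LIKE 'table%';
```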
  • Row lock (biased toward writes)
    • Characteristics: favored by the InnoDB storage engine; high overhead, slow locking, deadlocks possible, small lock granularity, low probability of lock conflict, high concurrency
    • The biggest differences between InnoDB and MyISAM are transaction support and row-level locking
    • Row locks support transactions
      • Transactions and ACID properties
        • A transaction is a logical processing unit consisting of a set of SQL
        • Atomicity: a transaction is an atomic unit of work; either all of its modifications to data are performed, or none are
        • Consistency: data must be consistent at the start and at the completion of a transaction. All relevant data rules must be applied to the transaction's modifications to maintain data integrity, and all internal data structures (B-tree indexes, doubly linked lists) must be correct at the end of the transaction
        • Isolation: the database provides isolation mechanisms so that transactions execute in an "isolated" environment unaffected by concurrent operations; intermediate states during transaction processing are invisible to the outside
        • Durability: after a transaction completes, its changes to data are permanent and survive even system failures
      • Transaction issues
        • Lost update: two or more transactions select the same row and then update it based on the originally selected value. Because each transaction is unaware of the others, the last update overwrites the updates made by the other transactions. (Avoidable if a second writer cannot access the row until the first writer's transaction commits)
        • Dirty read: transaction A reads data that transaction B has modified but not yet committed and acts on it. If B rolls back, the data A read is invalid and violates consistency
        • Non-repeatable read: transaction A reads a row; another transaction then modifies that row and commits; when A reads the row again within the same transaction it sees different data. (Reading another transaction's committed modifications)
        • Phantom read: a transaction runs the same query multiple times, and later executions return rows that were not seen before. (Reading rows that another transaction has inserted or deleted and committed)
      • Transaction isolation Level (resolving issues arising from transactions)
        • Check the current isolation level: show variables like 'transaction_isolation'

        • Read Committed and Repeatable Read are the two isolation levels most commonly used in real development. Both rely mainly on Multi-Version Concurrency Control (MVCC) to solve concurrent-transaction problems.

          • Read Committed generates a new snapshot before each SQL statement in a transaction executes, preventing dirty reads of other transactions' uncommitted data
          • Repeatable Read creates a snapshot before the first query of a transaction and reads that same snapshot for the rest of the transaction, thus avoiding both dirty reads and non-repeatable reads
        • The Repeatable Read level does not by itself resolve phantoms (InnoDB resolves phantom reads with next-key locks: gap locks plus record locks)

      • When an index is invalidated, row locks degrade to table locks
      • Gap lock
        • When InnoDB retrieves data by a range condition and requests a shared or exclusive lock, it locks the index entries of all records that meet the condition. Key values that fall within the range but do not exist are called "gaps"; InnoDB locks these gaps too, which is called a gap lock.
        • Harm: non-existent key values are locked "innocently", blocking inserts into the gap
      • Lock a single row: begin; select ... for update; commit;
      • Show status like ‘innodb_row_lock%’;
      • Optimization Suggestions
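The single-row lock above as a two-session sketch; the users table is illustrative, and id must be indexed so the row lock does not degrade to a table lock:

```sql
-- Session 1:
BEGIN;
SELECT * FROM users WHERE id = 1 FOR UPDATE;  -- X-locks the row id=1
-- Session 2 (blocks until session 1 commits or rolls back):
-- UPDATE users SET name = 'bob' WHERE id = 1;
-- Session 1:
COMMIT;
-- Inspect row-lock contention statistics:
SHOW STATUS LIKE 'innodb_row_lock%';
```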
  • Page lock: overhead and locking speed are between those of table locks and row locks; deadlocks can occur; lock granularity is between table and row locks; concurrency is moderate

Master-slave replication

  • The MySQL versions should be consistent, and MySQL should run in the background as a service
  • Both primary and secondary configuration items go under [mysqld] and are lower case
  • In Linux, for example
    • The host and slave must ping each other first
    • The primary host modifies its my.cnf configuration file
      • Unique ID of the primary server: server-id=1
      • Enable binary logging (required)
        • log-bin=[mysql path]/mysqlbin
      • Enable error logging (optional)
        • log-err=[mysql path]/mysqlerr
      • Root directory (optional)
        • basedir=[mysql path]
      • Temporary directory (optional)
        • tmpdir=[mysql path]
      • Data directory (optional)
        • datadir=[mysql path]/data
      • read-only=0
        • The primary can both read and write
      • Databases not to replicate (optional)
        • binlog-ignore-db=xxx
      • Databases to replicate (optional)
        • binlog-do-db=xxx
    • The slave modifies its my.cnf configuration file (uncomment/enable the lines)
      • Unique ID of the secondary server: server-id=2
      • Enable binary logging (optional)
        • log-bin=[mysql path]/mysqlbin
    • Restarting the mysql service
    • Disable the firewall
    • On the primary, execute: grant replication slave on *.* to 'account'@'secondary host IP' identified by 'password'; then execute flush privileges;
      • Query the master status
        • show master status; record the File and Position values
      • After this, do not operate on the primary's MySQL again (or File and Position will change)
    • On the slave, execute: change master to master_host='host IP', master_user='account', master_password='password', master_log_file='recorded File', master_log_pos=recorded Position;
      • Start slave replication: start slave;
      • Check status: show slave status\G (Slave_IO_Running and Slave_SQL_Running should both be Yes)
  • Stop slave replication: run stop slave; on the slave
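A consolidated sketch of the SQL steps above; the IPs, account, password, and log file/position are placeholders for your own values, and the GRANT ... IDENTIFIED BY form shown is MySQL 5.x syntax:

```sql
-- On the primary
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.1.20' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;  -- note File (e.g. mysqlbin.000001) and Position

-- On the slave
CHANGE MASTER TO
  MASTER_HOST = '192.168.1.10',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'password',
  MASTER_LOG_FILE = 'mysqlbin.000001',
  MASTER_LOG_POS = 107;
START SLAVE;
SHOW SLAVE STATUS\G  -- Slave_IO_Running and Slave_SQL_Running must both be Yes
```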