On March 26, 2019, TiDB 3.0.0 Beta.1 was released; the corresponding TiDB Ansible version is 3.0.0 Beta.1. Compared with TiDB 3.0.0 Beta, this release brings many improvements in system stability, usability, functionality, the SQL optimizer, statistics, and the execution engine.

TiDB

  • The SQL optimizer
    • Support computing Cartesian products using Sort Merge Join
    • Skyline Pruning is supported with rules to prevent execution plans from relying too much on statistics
    • Support the following window functions (see the sketch after this list):
      • NTILE
      • LEAD and LAG
      • PERCENT_RANK
      • NTH_VALUE
      • CUME_DIST
      • FIRST_VALUE and LAST_VALUE
      • RANK and DENSE_RANK
      • RANGE FRAMED
      • ROW FRAMED
      • ROW_NUMBER
    • Add a type of statistics that indicates the order correlation between a column and the handle column
  • SQL execution engine
    • Add built-in functions (see the sketch after this list):
      • JSON_QUOTE
      • JSON_ARRAY_APPEND
      • JSON_MERGE_PRESERVE
      • BENCHMARK
      • COALESCE
      • NAME_CONST
    • Optimize Chunk size based on query context to reduce SQL execution time and resource consumption of the cluster
  • Privilege management
    • Support SET ROLE and CURRENT_ROLE (see the sketch after this list)
    • Support DROP ROLE
    • Support CREATE ROLE
  • Server
    • Add the /debug/zip HTTP interface to get information about the current TiDB instance
    • Support the show pump status and show drainer status statements to display the Pump/Drainer status (see the sketch after this list)
    • You can modify the Pump/Drainer status online using SQL statements
    • SQL text can be fingerprinted with HASH to facilitate tracing slow SQL
    • Add the log_bin system variable (0 by default) to manage the enabling status of binlog; currently the status can only be viewed
    • Support managing the binlog sending strategy through the configuration file
    • Support querying slow logs through the INFORMATION_SCHEMA.SLOW_QUERY memory table (see the sketch after this list)
    • Change the MySQL Version displayed in TiDB from 5.7.10 to 5.7.25
    • Unify the log format specification to facilitate collection and analysis by tools
    • Add the high_error_rate_feedback_total monitoring item to record the difference between the actual data volume and the data volume estimated from statistics
    • Added QPS monitoring items for the Database dimension, which can be enabled through configuration items
  • DDL
    • Add the ddl_error_count_limit global variable (default: 512) to limit the number of DDL task retries (see the sketch after this list)
    • Support ALTER ALGORITHM INPLACE/INSTANT
    • Support the SHOW CREATE VIEW statement
    • Support the SHOW CREATE USER statement
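
A minimal sketch of the new window function support; the emp table and its dept, name, and salary columns are hypothetical:

```sql
-- Rank rows within each department and report each row's position and cumulative distribution.
SELECT dept, name, salary,
       RANK()       OVER (PARTITION BY dept ORDER BY salary DESC) AS salary_rank,  -- RANK / DENSE_RANK
       ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) AS row_num,      -- ROW_NUMBER
       CUME_DIST()  OVER (PARTITION BY dept ORDER BY salary DESC) AS cume_dist     -- CUME_DIST
FROM emp;
```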
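
The new built-in functions can be tried directly from a SQL client; the results noted in the comments are a sketch of typical output:

```sql
SELECT JSON_QUOTE('a"b');                            -- quotes a string as a JSON value: "a\"b"
SELECT JSON_ARRAY_APPEND('[1, 2]', '$', 3);          -- [1, 2, 3]
SELECT JSON_MERGE_PRESERVE('{"a": 1}', '{"a": 2}');  -- {"a": [1, 2]}
SELECT BENCHMARK(1000000, MD5('test'));              -- times repeated evaluation of an expression
SELECT COALESCE(NULL, NULL, 'fallback');             -- first non-NULL argument: 'fallback'
SELECT NAME_CONST('answer', 42);                     -- constant 42 returned in a column named answer
```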
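
A minimal sketch of the role statements; the role, database, and user names (app_read, app_db, alice) are hypothetical, and the GRANT statements that wire the role to privileges and users are assumed to be available alongside the statements listed above:

```sql
CREATE ROLE 'app_read';                      -- create a role (new in this release)
GRANT SELECT ON app_db.* TO 'app_read';      -- give the role privileges (assumed available)
GRANT 'app_read' TO 'alice'@'%';             -- assign the role to a user (assumed available)

-- In a session of user alice:
SET ROLE 'app_read';                         -- activate the role
SELECT CURRENT_ROLE();                       -- check which roles are active

DROP ROLE 'app_read';                        -- remove the role when it is no longer needed
```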
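
For the server items, a sketch of how the new status statements, the log_bin variable, and the slow-query memory table might be used; the SLOW_QUERY column names are assumptions based on the slow-log fields:

```sql
-- Check the status of Pump/Drainer nodes (requires a TiDB Binlog deployment):
SHOW PUMP STATUS;
SHOW DRAINER STATUS;

-- Check whether binlog is enabled; in this release the variable can only be viewed:
SHOW VARIABLES LIKE 'log_bin';

-- Query recent slow statements from the memory table:
SELECT Time, Query_time, Query
FROM INFORMATION_SCHEMA.SLOW_QUERY
ORDER BY Query_time DESC
LIMIT 10;
```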
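
For the DDL items, a short sketch; the table, view, and user names (t, v, alice) are hypothetical, and the exact spelling of the retry-limit variable may differ by deployment:

```sql
-- Limit the number of DDL task retries (default 512); variable name as listed above:
SET GLOBAL ddl_error_count_limit = 512;

-- State the expected algorithm for an online schema change:
ALTER TABLE t ADD COLUMN c INT, ALGORITHM=INSTANT;

-- Show the definition of a view or a user:
SHOW CREATE VIEW v;
SHOW CREATE USER 'alice'@'%';
```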

PD

  • Unify the log format specification to facilitate collection and analysis by tools
  • The simulator
    • Support using different heartbeat intervals for different stores
    • Add a scenario for importing data
  • Hotspot scheduling is configurable
  • Add a monitoring item based on the store address to replace the original store ID
  • Optimize the overhead of GetStores to speed up the Region inspection cycle
  • Added the interface for deleting the Tombstone Store

TiKV

  • Optimize the Coprocessor computing and execution framework and implement the TableScan section, improving single TableScan performance by 5% to 30%
    • Implement the definition of the BatchRows row and the BatchColumn column
    • Implement VectorLike to support accessing encoded and decoded data in a uniform way
    • Define the BatchExecutor interface and implement the method of converting requests to BatchExecutor
    • Implement converting the expression tree into RPN format
    • The TableScan operator is implemented in Batch mode and accelerates calculation through vectorization
  • Unify the log format specification to facilitate collection and analysis by tools
  • The Raw Read interface uses the Local Reader to read data
  • Added Metrics for configuration information
  • Added Metrics for keys exceeding the bound
  • Added an option to panic or return an error when encountering a key-out-of-bounds error
  • Add Insert semantics: Prewrite succeeds only if the key does not exist, eliminating the Batch Get
  • Batch System Uses a fairer Batch policy
  • tikv-ctl supports Raw scan

Tools

  • TiDB-Binlog
    • Added the Arbiter tool, which supports reading binlog from Kafka and replicating it to MySQL
    • Reparo supports filtering files that do not need to be synchronized
    • Synchronizes generated columns
  • Lightning
    • Supports disabling TiKV periodic Level-1 compaction when the TiKV cluster version is 2.1.4 or later, and automatically performs a Level-1 compaction when running in import mode
    • Limits the number of open import engines according to the table_concurrency configuration item (default: 16) to prevent excessive use of tikv-importer disk space
    • Supports saving intermediate SST to disk to reduce memory usage
    • Optimized the import performance of tikv-importer, supporting separate import of data and indexes of large tables
    • CSV files can be imported
  • sync-diff-inspector (data synchronization comparison tool)
    • TiDB statistics can be used to divide the chunk for comparison
    • Multiple columns are supported to divide the chunk for comparison

Ansible

  • N/A