Cache
TDengine adopts a time-driven cache management strategy, first in, first out (FIFO), also known as a write-driven cache management mechanism. Unlike read-driven data caching (least recently used, LRU), this strategy stores the most recently written data directly in the system cache. When the cache reaches a threshold, the earliest data is written to disk in batches. Generally speaking, in IoT data applications users care most about the most recently generated data, that is, the current state. TDengine takes full advantage of this characteristic by keeping the most recently arrived (current-state) data in the cache.
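The write-driven, FIFO idea can be sketched in a few lines of code. This is only a conceptual illustration, not TDengine's internal implementation; the block capacity, block count, and flush behavior below are hypothetical values chosen for the example.

from collections import deque

class FifoWriteCache:
    """Conceptual sketch of a write-driven, first-in-first-out cache.

    New records always go into the newest block; once the configured
    number of blocks is exceeded, the earliest block is flushed to disk.
    """

    def __init__(self, block_capacity=4, max_blocks=2):
        self.block_capacity = block_capacity   # records per block (hypothetical)
        self.max_blocks = max_blocks           # cache threshold (hypothetical)
        self.blocks = deque([[]])              # newest block sits at the right end

    def write(self, record):
        if len(self.blocks[-1]) >= self.block_capacity:
            self.blocks.append([])             # open a new block for newly arrived data
            if len(self.blocks) > self.max_blocks:
                self._flush(self.blocks.popleft())  # evict the earliest written block
        self.blocks[-1].append(record)

    def latest(self):
        # The most recently written (current-state) record is always in cache.
        return self.blocks[-1][-1] if self.blocks[-1] else None

    def _flush(self, block):
        print(f"flushing {len(block)} records to disk")

cache = FifoWriteCache()
for ts in range(10):
    cache.write({"ts": ts, "degree": 20 + ts * 0.1})
print(cache.latest())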
TDengine provides millisecond-level data retrieval through its query functions. Because the most recently arrived data is kept in the cache, queries for the latest record, or the latest batch of records, can be answered more quickly, improving overall query response time. In this sense, with appropriate configuration parameters TDengine can itself serve as a data cache, without deploying an additional caching system, which simplifies the system architecture and reduces operation and maintenance costs. Note that after TDengine restarts, the system cache is cleared and the previously cached data is flushed to disk in batches; the cached data is not reloaded into the cache, unlike dedicated key-value caching systems.
TDengine allocates a fixed amount of memory as cache space, which can be configured based on application requirements and hardware resources. With appropriate cache settings, TDengine delivers very high write and query performance. Each virtual node (vnode) in TDengine is created with its own cache pool, which it manages itself; different virtual nodes do not share cache pools, while all tables of a virtual node share that node's cache pool.
TDengine manages the memory pool in blocks, and data is stored in the blocks in columnar form. A vnode's memory pool is allocated in blocks when the vnode is created, and each block is managed on a first-in, first-out basis. The memory blocks required by a table are allocated from the vnode's memory pool. The size of a block is determined by the system configuration parameter cache, the maximum number of memory blocks per table is determined by the configuration parameter tblocks, and the average number of memory blocks per table is determined by the configuration parameter ablocks. Therefore, the total cache memory of a vnode is cache * ablocks * tables. The block size parameter cache should not be set too small; to be efficient, a cache block must store at least dozens of records. The minimum value of the ablocks parameter is 2, ensuring that each table is allocated at least two memory blocks on average.
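As a rough illustration of the sizing formula above, the snippet below works through one set of numbers. The block size, block count, and table count are purely hypothetical; actual units and defaults depend on your TDengine version and taos.cfg settings.

# Worked example of the per-vnode cache sizing formula: total = cache * ablocks * tables.
# All values below are assumptions for illustration only.
cache_block_bytes = 16 * 1024   # assumed size of one memory block (parameter: cache)
ablocks = 4                     # assumed average number of blocks per table (parameter: ablocks)
tables = 10000                  # assumed number of tables hosted by the vnode

total_bytes = cache_block_bytes * ablocks * tables
print(f"estimated vnode cache: {total_bytes / (1024 * 1024):.0f} MB")  # -> 625 MB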
You can use the last_row function to quickly retrieve the last record of a table or super table, which makes it easy to display the real-time status or latest collected values of devices on a large dashboard. For example:
select last_row(degree) from thermometer where location='beijing';
This SQL statement retrieves the most recently recorded temperature value of every sensor located in Beijing.