This walkthrough is based on the redis.conf shipped with Redis 4.0.14. A few parameters still need further study.

1. General commands

1.1 ./redis-server /path/to/redis.conf

Starts Redis with the given configuration file.

1.2 include /path/to/local.conf

include lets you pull in additional configuration files. If the same directive appears in more than one file, the value read last wins:

include /path/to/local.conf
include /path/to/other.conf

1.3 loadmodule /path/to/my_module.so

Loads modules at startup. If Redis cannot load a module it aborts, so only list modules you actually need. Multiple loadmodule lines can be used:

loadmodule /path/to/my_module.so
loadmodule /path/to/other_module.so

1.4 bind 127.0.0.1

Binds Redis to specific network interfaces. If no bind is given, Redis listens on all interfaces; for security, bind only the interfaces you actually need (for example 127.0.0.1 for local-only access).

1.5 protected-mode yes

Protected mode: if it is enabled and Redis has neither a bind directive nor a password configured, Redis only accepts connections from 127.0.0.1. Enabled by default.

1.6 port 6379

The listening port. If set to 0, Redis does not listen on a TCP socket.

1.7 tcp-backlog 511

The TCP listen backlog. On busy servers, raise this together with the Linux kernel parameters net.core.somaxconn and net.ipv4.tcp_max_syn_backlog, since the kernel silently caps the backlog at somaxconn.
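
A minimal sketch of the matching kernel tuning; the values are illustrative, not prescriptive:

sysctl -w net.core.somaxconn=511                # accept-queue cap; should be at least tcp-backlog
sysctl -w net.ipv4.tcp_max_syn_backlog=2048     # cap on half-open (SYN_RECV) connections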

1.8 unixsocket /tmp/redis.sock unixsocketperm 700

Unix domain socket. By default Redis does not listen on a Unix socket; leave these unset unless you need it.

1.9 timeout 0

Close a client connection after it has been idle for N seconds (0 disables the idle timeout).

1.10 tcp-keepalive 300

If nonzero, use SO_KEEPALIVE to send TCP ACKs to idle clients at the given interval (in seconds). This is useful for two reasons:

  • Detect dead connections.
  • If there are other network devices between the networks, the connection can be kept alive

Note that depending on the configuration of the kernel, it may take twice as long to close the connection using this mechanism.

The default value is 300.

2. Standard configuration

2.1 daemonize yes

By default Redis does not run as a daemon. Note that if you enable daemon mode, Redis writes its process ID to /var/run/redis.pid.

2.2 supervised no

I don't fully understand this option yet, so I'm keeping the original comment here and will update once I've worked it out:

 If you run Redis from upstart or systemd, Redis can interact with your
 supervision tree. Options:
   supervised no      - no supervision interaction
   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
   supervised auto    - detect upstart or systemd method based on
                        UPSTART_JOB or NOTIFY_SOCKET environment variables
 Note: these supervision methods only signal "process is ready."
       They do not enable continuous liveness pings back to your supervisor.

2.3 pidfile /var/run/redis_6379.pid

Pid file path; the default is /var/run/redis.pid. In non-daemon mode, no pid file is created unless pidfile is explicitly configured. In daemon mode a pid file is always created, using the default path if pidfile is not set.

2.4 loglevel notice

Specify the log level for the service:

  • debug
  • verbose
  • notice
  • warning

The default is notice.

2.5 logfile ""

Specifies the Redis log file name and path. You can also set logfile "" to force Redis to log to standard output. Note that if you log to standard output while Redis runs in daemon mode, the logs are sent to /dev/null and lost.

2.6 syslog-enabled no

To enable logging to the system logger, just set 'syslog-enabled' to yes,
and optionally update the other syslog parameters to suit your needs.

2.7 syslog-ident redis

Specify the syslog identity.

2.8 syslog-facility local0

Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.

2.9 databases 16

  • In cluster mode, only database 0 is available.
  • Sets the number of databases. The default database is 0; a connection can switch with SELECT <dbid>, where dbid ranges from 0 to databases-1, i.e. 0 to 15 by default (see the session sketch below).
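
A quick redis-cli session switching databases, as a minimal sketch; foo and bar are placeholder key and value:

redis-cli
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> SET foo bar
OK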

2.10 always-show-logo yes

A fun one: always show the Redis ASCII-art logo at startup (normally it is shown only when logging to a terminal).

3. Snapshot related

3.1 Enabling RDB Persistence

RDB persistence is controlled by save <seconds> <changes> lines.

save 900 1 means: if at least 1 key changed within 900 seconds, write an RDB snapshot to disk. To disable RDB, comment out every save line, or configure save "". The defaults are:

save 900 1
save 300 10
save 60 10000

3.2 stop-writes-on-bgsave-error yes

By default, if RDB snapshots are enabled and the last background save failed, Redis stops accepting write requests. This is a hard way of telling the user that persistence is not working properly; otherwise nobody might notice that something is badly wrong. Once a background save succeeds again (the RDB file is saved properly), Redis automatically allows writes again. However, if you already monitor the Redis server yourself, you may want to disable this behavior so that Redis keeps serving writes during a disk failure: just set stop-writes-on-bgsave-error no.

3.3 rdbcompression yes

Use the LZF algorithm to compress RDB files. If you want to save some CPU, set it to no.

3.4 rdbchecksum yes

Since RDB format version 5, a CRC64 checksum (cyclic redundancy check) is written at the end of the RDB file. This makes the format more resistant to corruption, but costs roughly 10% of performance when saving and loading, so you can disable it for maximum performance. RDB files created with the checksum disabled carry a checksum of zero, which tells the loading code to skip the check. The default is rdbchecksum yes.

3.5 dbfilename dump.rdb

RDB file name

3.6 dir ./

The Redis working directory: AOF files, RDB files, and the nodes.conf file used in cluster mode are all created here.

3.7 slaveof <masterip> <masterport>

Master/slave replication. Use slaveof to make this Redis instance a replica of another Redis server.

  • Redis master-slave replication is asynchronous, but you can configure the master to stop accepting writes when it cannot reach a given number of slaves.
  • If the replication link is lost only briefly, the slave can perform a partial resynchronization; you may want to size the replication backlog accordingly (see repl-backlog-size below).
  • Replication is automatic and needs no user intervention: after a network failure, the slave reconnects to the master and resynchronizes.
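
A minimal slave-side configuration sketch; the master address and password are hypothetical:

slaveof 192.168.1.100 6379
masterauth s3cret-master-password    # only needed if the master sets requirepass (see 3.8)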

3.8 masterauth <master-password>

If the primary node has a password, the secondary node must configure the password; otherwise, the primary node rejects the replication request.

3.9 slave-serve-stale-data yes

When the link with the master is down or replication is still in progress, the slave can behave in two ways:

  • If set to yes, the slave keeps responding to client requests, possibly with stale data.
  • If set to no, the slave replies "SYNC with master in progress" to every command except INFO and SLAVEOF.

3.10 slave-read-only yes

Controls whether slaves accept write requests. Writing short-lived data to a slave can occasionally be useful (it is wiped on the next resync with the master), but it causes problems if clients write there by mistake. Since Redis 2.6, slaves are read-only by default. Note that read-only mode is not a security feature against untrusted clients; it only protects against misuse. A read-only slave still exposes administrative commands such as CONFIG and DEBUG. To limit those, use rename-command to rename or disable the administrative commands.

3.11 repl-diskless-sync no

Synchronization policy: Disk or Socket.

Warning: Diskless replication is experimental at this time. When a new slave node or a reconnected slave node cannot perform incremental synchronization, a full synchronization is required. An RDB file is transferred from the primary node to the secondary node in one of two ways:

  • Disk-backed: the primary node creates a new process to write the RDB file to the disk. The file is then progressively transmitted by the main process to multiple slave nodes
  • Diskless: the master node creates a new process that writes the RDB file directly to the socket of the slave node without touching the disk from beginning to end.

With disk-backed replication, while the RDB file is being generated, more slaves can queue up and be served that same file as soon as the child process finishes writing it. With diskless replication, once the transfer begins, slaves that arrive later are queued for the next transfer; the master therefore waits a short, configurable time before starting, in the hope that several slaves arrive and the transfer can run to all of them in parallel. Diskless replication works better when disks are very slow and network bandwidth is plentiful.

3.12 repl-diskless-sync-delay 5

If diskless replication is enabled, this sets how long the master waits for slaves to arrive before starting the transfer. This matters because once the transfer starts, the master cannot serve new full-sync requests; late slaves have to queue for the next RDB transfer. The delay is in seconds, default 5. Set it to 0 to disable the wait and always start the transfer immediately.

3.13 repl-ping-slave-period 10

The secondary node pings the primary node at regular intervals. The default is 10 seconds.

3.14 repl-timeout 60

This timeout applies in three scenarios:

  1. Bulk-transfer I/O during SYNC, from the slave's point of view.
  2. The master appearing unresponsive, from the slave's point of view.
  3. A slave appearing unresponsive (no REPLCONF ACK pings), from the master's point of view.

Note that this value must be larger than repl-ping-slave-period, otherwise a timeout will be detected whenever traffic between master and slave is low.

3.15 repl-disable-tcp-nodelay no

Controls whether TCP_NODELAY is disabled on the slave socket after SYNC. If yes, Redis sends data to slaves in fewer, larger TCP packets, using less bandwidth but adding latency to replication, up to about 40 milliseconds depending on the Linux kernel configuration. If no, replication latency drops slightly but replication uses more bandwidth. By default we prefer low latency, but with poor network conditions setting this to yes may be a good idea.

3.16 repl-backlog-size 1mb

Sets the replication backlog size. The backlog is a buffer: while a slave is disconnected, the master accumulates the replication stream there, so that on reconnection the slave can often fetch just the missed portion and perform a partial resynchronization instead of a full one. The larger the backlog, the longer a slave can stay disconnected and still resync partially. The backlog is allocated only once at least one slave has connected.

3.17 repl-backlog-ttl 3600

After the master has had no connected slaves for the configured number of seconds, the backlog buffer is freed. The timer starts when the last slave disconnects. The unit is seconds; a value of 0 means the backlog is never released.

3.18 slave-priority 100

This setting is used by Redis Sentinel. When the master fails, Sentinel prefers to promote the slave with the lowest priority value. A priority of 0 means the node can never be promoted to master. The default is 100.

3.19 min-slaves-to-write 3 and min-slaves-max-lag 10

If fewer than N slaves, each with a replication lag of at most M seconds, are online, the master stops accepting write requests. For example, to require at least 3 slaves with lag under 10 seconds:

min-slaves-to-write 3
min-slaves-max-lag 10

Setting either of these two configurations to 0 disables this feature. It is disabled by default.

3.20 slave-announce-ip 5.5.5.5 and slave-announce-port 1234

A master reports the IP addresses and ports of its currently connected slaves in several places: for example, the Replication section of INFO output, and the ROLE command.

 The listed IP and address normally reported by a slave is obtained
 in the following way:

   IP: The address is auto detected by checking the peer address
   of the socket used by the slave to connect with the master.

   Port: The port is communicated by the slave during the replication
   handshake, and is normally the port that the slave is using to
   list for connections.

 However when port forwarding or Network Address Translation (NAT) is
 used, the slave may be actually reachable via different IP and port
 pairs. The following two options can be used by a slave in order to
 report to its master a specific set of IP and port, so that both INFO
 and ROLE will report those values.

 There is no need to use both the options if you need to override just
 the port or the IP address.

 slave-announce-ip 5.5.5.5
 slave-announce-port 1234

4. Security

4.1 requirepass foobared

Sets a password for Redis. Because Redis is fast, an attacker can try up to 150,000 passwords per second against a modern box, so the password must be very strong or it will be easy to brute-force.
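
A sketch; the passphrase is a placeholder, pick something long and random:

requirepass a-very-long-and-random-passphrase

Clients then authenticate with it, for example:

redis-cli -a a-very-long-and-random-passphrase ping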

4.2 rename-command CONFIG "" and rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

rename-command CONFIG "" disables a command completely, while renaming CONFIG to an unguessable string such as b840fc02d524045429941cc15f59e41cb7be6c52 keeps it available to administrators who know the new name. Either way, careless programmers cannot use dangerous commands.

Note that command renames are also relevant to replication: if you rename commands, apply the same renames on the slaves as well, otherwise there will be problems.
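
Both forms as redis.conf lines (use one or the other for a given command; the hash string is the example from the stock config comment):

rename-command CONFIG ""
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52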

5. The client

5.1 maxclients 10000

Sets the maximum number of simultaneously connected clients; the default is 10000. Once the limit is reached, Redis refuses new connections and sends the error "max number of clients reached" to the client.

6. Memory management

6.1 maxmemory

Sets a memory usage limit. When the limit is reached, Redis removes keys according to the configured eviction policy (maxmemory-policy). If Redis cannot remove keys under that policy, or the policy is noeviction, Redis starts returning errors to write commands while still serving reads. Note that slave output buffers can be large: if you have slaves attached, leave headroom below maxmemory for them, otherwise a full resynchronization can push memory use far past the limit. For example, with a 4 GB box, if slave output buffers can grow to 2 GB, memory fills up and Redis starts evicting keys, which is definitely not what we want.

6.2 maxmemory-policy noeviction

Memory flushing policy, which determines how to delete keys when Redis is full of memory. The default is noeviction.

  • volatile-lru -> evict keys with an expire set, using an approximated LRU
  • allkeys-lru -> evict any key, using an approximated LRU
  • volatile-lfu -> evict keys with an expire set, using an approximated LFU
  • allkeys-lfu -> evict any key, using an approximated LFU
  • volatile-random -> remove a random key with an expire set
  • allkeys-random -> remove a random key
  • volatile-ttl -> remove the key with the nearest expire time
  • noeviction -> don't evict anything; return an error on writes when memory is full

LRU means Least Recently Used; LFU means Least Frequently Used.

LRU, LFU and volatile-ttl are all implemented with approximated randomized algorithms. Note that with any of these policies, Redis returns an error on write commands when there is no suitable key to evict.

The write commands in question are: set setnx setex append incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby getset mset msetnx exec sort.

6.3 maxmemory-samples 5

The LRU, LFU and minimal-TTL algorithms are approximated rather than exact (mostly to save memory), so you can trade speed for accuracy. By default Redis samples 5 keys and evicts the best candidate among them; you can change this number. The default of 5 gives good results; 10 gets very close to true LRU at more CPU cost, and 3 is faster but less accurate.
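
A sketch of how these directives combine; the values are illustrative, not recommendations:

maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 10    # closer to true LRU, at slightly higher CPU cost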

7. Lazy freeing

Redis has two primitives for deleting keys. DEL is a blocking delete: Redis stops processing new requests while it reclaims, synchronously, all the memory associated with the deleted object. If the key holds a small object, DEL is very fast, close to O(1) or O(log N); but if the object is large, the server can block for a long time to complete the command. Redis therefore also offers the non-blocking UNLINK command (a non-blocking DEL) and the ASYNC option of FLUSHALL and FLUSHDB, which reclaim memory in the background. These commands return in constant time, and a background thread incrementally frees the memory.
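
A quick illustration with redis-cli; bigkey is a placeholder for a key holding a large data structure:

redis-cli UNLINK bigkey       # returns immediately; a background thread reclaims the memory
redis-cli FLUSHALL ASYNC      # drop the whole dataset without blocking the server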

All the commands above are issued explicitly by users, depending on their scenario. But Redis also deletes keys or flushes whole databases on its own, as a side effect of other operations, in these scenarios:

  • On eviction: when maxmemory is reached, keys must be removed to make room for new data without blowing past the limit.
  • On expiry: when a key with an expire set reaches its time to live.
  • As a side effect of commands that store data on a key that may already exist: for example, SET on an existing key must delete the old value before storing the new one.
  • During replication, when a slave performs a full resynchronization with its master and its whole in-memory dataset is flushed.

If you want the above four scenarios to use asynchronous deletion, you can use the following configuration:

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no

8. AOF

8.1 appendonly no

By default, Redis asynchronously dumps its in-memory dataset to disk (RDB). That is good enough for many uses, but data written after the last dump is lost if the machine goes down. AOF (Append Only File) is an alternative persistence policy with better durability. With the default configuration, Redis loses at most one second of writes, and you can tighten the fsync policy so that it loses at most one write. AOF and RDB persistence can be enabled at the same time; if AOF is enabled, Redis always loads the AOF file at startup, because it offers the stronger durability guarantees.

8.2 appendfilename "appendonly.aof"

Aof file name.

8.3 appendfsync everysec

The fsync() system call tells the operating system to actually flush buffered data to disk instead of waiting for more output. Some operating systems really flush immediately; others just try to do it as soon as possible. Redis supports three policies:

  • No: do not actively call fsync() and let the operating system decide when to write to the disk
  • Always: Fsync () is called after every write operation, very slow, but with the highest data security.
  • Everysec: Calls fsync() once every second, a compromise strategy.

The default is everysec, the usually recommended compromise between speed and data safety.

appendfsync always
appendfsync everysec
appendfsync no

8.4 no-appendfsync-on-rewrite no

When the AOF fsync policy is always or everysec and a background save process (a BGSAVE or an AOF rewrite) is performing a lot of disk I/O, on some Linux configurations Redis may block too long on its fsync() call. There is currently no fix for this: even an fsync issued from a different thread blocks on the synchronous write in progress. To mitigate the problem, this option prevents fsync from being called in the main process while a BGSAVE or BGREWRITEAOF is running; Redis then behaves as if appendfsync no were temporarily in effect. In practice this means up to 30 seconds of writes could be lost in the worst case (with default Linux settings), so keep it at no if you are sensitive to data loss; set it to yes only if you have latency problems.

8.5 auto-aof-rewrite-percentage 100 and auto-aof-rewrite-min-size 64mb

Automatic AOF rewrite. Redis remembers the size of the AOF file after the last rewrite (or the size at startup if no rewrite has happened since), and compares it with the current size; when the file has grown by more than the configured percentage, a rewrite is triggered. A minimum size is also required: if the current AOF file is smaller than auto-aof-rewrite-min-size, no rewrite happens even if the percentage is exceeded, which avoids pointless rewrites of files that are still small. Setting the percentage to 0 disables automatic rewrites.
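
A worked example with the default values; the sizes are illustrative:

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# If the AOF was 80mb after the last rewrite, 100% growth means the next
# automatic rewrite triggers once the file reaches 160mb. A file that grew
# from 10mb to 30mb (200%) is left alone, because it is still below 64mb.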

8.6 aof-load-truncated yes

When Redis starts, it loads the AOF file to rebuild the dataset in memory, but the file may be truncated, i.e. cut short at the end. This typically happens when the machine Redis runs on crashes, especially when an ext4 filesystem is mounted without the data=ordered option. In that case, Redis can either abort with an error or load as much of the file as it can read.

If aof-load-truncated is yes, Redis still loads a truncated AOF file and logs an error to notify the user. If it is no, Redis aborts with an error and refuses to start; you then have to repair the file with the redis-check-aof tool before restarting. Note that if the AOF is found corrupted in the middle (not just truncated at the end), Redis exits with an error either way; this option only covers the truncated-tail case.

8.7 aof-use-rdb-preamble no

When rewriting the AOF file, Redis can use an RDB preamble to make rewriting and loading faster. With this option enabled, the rewritten AOF file consists of two parts: an RDB body followed by plain AOF commands. At startup, if the AOF file begins with the "REDIS" magic string, Redis loads the RDB part first and then replays the remaining AOF tail. The option is off by default.

9. Lua scripting

9.1 lua-time-limit 5000

The maximum number of milliseconds a Lua script may run. If the limit is reached, Redis logs that the script is still running after the maximum allowed time and starts replying to queries with an error. While a script is timed out, only the SCRIPT KILL and SHUTDOWN NOSAVE commands are available: SCRIPT KILL can stop a script that has not yet issued any write command; SHUTDOWN NOSAVE is the only way out once the script has already written data. Set lua-time-limit to 0 or a negative value for no limit and no warnings.
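
A sketch of the timeout in action; the infinite loop is deliberately pathological:

redis-cli EVAL "while true do end" 0    # hangs; after lua-time-limit other clients get a BUSY error
redis-cli SCRIPT KILL                   # from a second connection; works because the script wrote nothing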

10. Redis Cluster

10.1 cluster-enabled yes

A normal Redis instance cannot be part of a Redis Cluster; only nodes started as cluster nodes can. To start an instance as a cluster node, set cluster-enabled to yes.

10.2 cluster-config-file nodes-6379.conf

Every cluster node has a cluster configuration file. This file is not intended to be edited by hand; it is created and updated by Redis itself. Each node needs its own cluster configuration file, so make sure nodes running on the same system use different file names and never overwrite each other's.

10.3 cluster-node-timeout 15000

Cluster node timeout is the maximum number of milliseconds a node may be unreachable before it is considered failed. Most other internal time limits are multiples of this value.

10.4 cluster-slave-validity-factor 10

After a master goes down, if its slave's data is too old (out of sync for too long), the slave does not attempt a failover and will not become master. There is no simple way to measure a slave's "data age", so two checks are performed:

  • If there are multiple slaves able to fail over, they exchange messages to give an advantage to the slave with the largest replication offset, i.e. the one that received the most data from the master. Each slave computes its rank from the offsets and delays its failover attempt proportionally to that rank.
  • Each slave tracks the time of its last interaction with the master: the last ping, the last command received from it, or the time the link went down. If too much time has passed, the slave does not try to fail over at all.

The second check is the one users can tune. A slave does not fail over if the time since its last interaction with the master exceeds (node-timeout * slave-validity-factor) + repl-ping-slave-period. For example, with node-timeout = 30 seconds, slave-validity-factor = 10 and repl-ping-slave-period = 10 seconds, a slave stays passive once more than 310 seconds have passed since it last heard from the master. A larger slave-validity-factor allows slaves holding older data to become master; too small a value may mean no slave is ever able to fail over. For maximum availability, set slave-validity-factor to 0: slaves then ignore how long ago they talked to the master and always try to fail over (though they still delay the election proportionally to their rank).

10.5 cluster-migration-barrier 1

Slaves can migrate to orphaned masters (masters left with no working slaves), but only if their own master keeps at least N other working slaves. N is the migration barrier. With migration-barrier 1, if a master has two slaves, one of them may migrate when an orphaned master appears in the cluster. To disable slave migration, set a very large value such as 999. A value of 0 is only useful for debugging; never use it in production.

10.6 cluster-require-full-coverage yes

By default, if a Redis Cluster detects that at least one hash slot is uncovered (no live node is serving it), it stops accepting queries. This way, if part of the cluster goes down, the whole cluster becomes unavailable. If you instead want the subset of the key space that is still covered to keep serving queries, set cluster-require-full-coverage no.

10.7 cluster-slave-no-failover no

If this option is set to yes, a slave never attempts an automatic failover of its master when the master goes down; a manual failover is still possible, however. This is useful in multi data-center deployments where one data center should never be promoted automatically, since the master would otherwise migrate to the other data center, which can be troublesome.

11. CLUSTER DOCKER/NAT support

11.1 The Cluster Proactively Informs the IP address

In certain deployments, the automatic address discovery of Redis Cluster nodes fails because addresses are NATted or ports are forwarded (as with Docker containers). To make Redis Cluster work in such environments, statically configure the public address and ports:

  • cluster-announce-ip 10.10.10.10
  • cluster-announce-port 6379
  • cluster-announce-bus-port 6380

If the configuration file does not contain these items, Redis Cluster falls back to the normal automatic address discovery.

12. SLOW LOG

12.1 slowlog-log-slower-than 10000 and slowlog-max-len 128

The Redis slow log records operations whose execution time exceeded a given threshold. The measured time does not include I/O (talking to the client, sending the reply); it is just the time spent actually executing the command. Two parameters control the slow log:

  • slowlog-log-slower-than 10000: log commands slower than this many microseconds; 10000 is 10 milliseconds, and 1000000 equals one second. A negative value disables the slow log; zero logs every command.
  • slowlog-max-len 128: the maximum length of the slow log; once it is full, the oldest entry is dropped whenever a new one is logged.

The log can be cleared with the SLOWLOG RESET command.
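
Typical slow log inspection from redis-cli:

redis-cli CONFIG SET slowlog-log-slower-than 10000    # log anything over 10 milliseconds
redis-cli SLOWLOG GET 10                              # show the ten most recent entries
redis-cli SLOWLOG RESET                               # clear the log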

13. LATENCY MONITOR

13.1 latency-monitor-threshold 0

The Redis latency monitor samples different operations at runtime to help the user analyze why Redis is stalling; the LATENCY command prints views and reports of the collected data. Only events taking longer than the configured number of milliseconds are logged. Setting latency-monitor-threshold to 0 turns the monitor off, which is the default. Leave it off when you have no latency problems, since collecting the data has a small but measurable performance impact under heavy load. The monitor can be enabled at runtime with CONFIG SET latency-monitor-threshold <milliseconds>.
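
Enabling and inspecting the monitor at runtime; the 100 ms threshold is illustrative:

redis-cli CONFIG SET latency-monitor-threshold 100    # record events slower than 100 ms
redis-cli LATENCY LATEST                              # latest latency event per source
redis-cli LATENCY DOCTOR                              # human-readable analysis and advice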

14. Event notification

14.1 notify-keyspace-events ""

Redis can notify pub/sub clients when events affect the key space. If event notification is enabled and a client performs a DEL on the key "foo" in database 0, two messages are published:

  • PUBLISH __keyspace@0__:foo del
  • PUBLISH __keyevent@0__:del foo

You can select the event level for redis notifications. All levels are marked with a single character:

  • K Keyspace events, published with __keyspace@<db>__ prefix.
  • E Keyevent events, published with __keyevent@<db>__ prefix.
  • g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, …
  • $ String commands
  • l List commands
  • s Set commands
  • h Hash commands
  • z Sorted set commands
  • x Expired events (events generated every time a key expires)
  • e Evicted events (events generated when a key is evicted for maxmemory)
  • A Alias for g$lshzxe, so that the “AKE” string means all the events.

The notify-keyspace-events parameter takes a string of zero or more of these characters; the empty string (the default) disables notifications entirely.
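
For example, to watch key expirations (E selects the keyevent class, x the expired events); foo and bar are placeholders:

redis-cli CONFIG SET notify-keyspace-events Ex
redis-cli PSUBSCRIBE '__keyevent@0__:expired'     # terminal 1: wait for events
redis-cli SET foo bar EX 1                        # terminal 2: about a second later, terminal 1 reports 'foo'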

15. Advanced Settings

15.1 Ziplist-related Configurations

When a hash is small, Redis stores it in a more memory-efficient encoding (a ziplist). The thresholds on entry count and value size can be set with the following configuration:

hash-max-ziplist-entries 512
hash-max-ziplist-value 64
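You can watch the encoding with OBJECT ENCODING; smallhash is a placeholder key:

redis-cli HSET smallhash field value
redis-cli OBJECT ENCODING smallhash     # -> "ziplist" while both thresholds hold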

15.2 list-max-ziplist-size -2

Lists are also specially encoded to save memory: a list is a quicklist whose nodes are ziplists, and this parameter bounds the size of each ziplist node. A positive value limits the number of entries per ziplist node; a negative value limits each node's size in bytes:

  • -5: max size: 64 Kb <– not recommended for normal workloads
  • -4: max size: 32 Kb <– not recommended
  • -3: max size: 16 Kb <– probably not recommended
  • -2: max size: 8 Kb <– good
  • -1: max size: 4 Kb <– good

Configurations -2 and -1 have the highest performance

15.3 list-compress-depth 0

Lists can also be compressed. The quicklist underlying a list is a doubly linked list of ziplist nodes, and the compress depth is the number of nodes at each end of the list that are left uncompressed. The head and tail nodes are never compressed.

  • 0: disables compression.
  • 1: [head]->node->node->…->node->[tail]; only [head] and [tail] stay uncompressed, every node in between is compressed.
  • 2: [head]->[next]->node->…->node->[prev]->[tail]; the two nodes at each end ([head], [next], [prev], [tail]) stay uncompressed, the nodes between them are compressed.
  • 3: [head]->[next]->[next]->node->…->node->[prev]->[prev]->[tail], and so on.

15.4 set-max-intset-entries 512

Sets have an internal optimization too: when a set contains only integers (radix-10, within 64-bit signed range), it is stored as an intset. The encoding switches from intset to a regular hash table once the set grows past set-max-intset-entries elements, or as soon as a non-integer member is added.
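
The same check works for sets; nums is a placeholder key:

redis-cli SADD nums 1 2 3
redis-cli OBJECT ENCODING nums     # -> "intset"
redis-cli SADD nums hello
redis-cli OBJECT ENCODING nums     # -> "hashtable"; the non-integer member forced the conversion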

15.5 zset-max-ziplist-entries 128 and zset-max-ziplist-value 64

When a sorted set grows beyond 128 elements, or any element is longer than 64 bytes, it stops using the ziplist encoding.

15.6 hll-sparse-max-bytes 3000

HyperLogLogs have a byte limit for their sparse representation, including the 16-byte header. When a sparse HyperLogLog crosses this limit, it is converted to the dense representation. Values above 16000 are pointless, since at that size the dense representation is more memory-efficient anyway. The suggested value is about 3000: it keeps the space-efficient encoding without making PFADD too slow (PFADD is O(N) on a sparse HyperLogLog). If memory is tighter than CPU, the value can be raised to around 10000.

15.7 activerehashing yes

Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU time to rehash the main Redis hash tables. Redis's hash implementation otherwise rehashes lazily: the more operations hit a table being rehashed, the more rehashing steps run, so on an idle server the rehashing never finishes and the table keeps using extra memory. This option is enabled by default: ten times per second Redis spends a little CPU to push rehashing forward and free memory.

15.8 Client output buffer limits

The client output buffer limits force-disconnect clients that do not read data from the server fast enough; if a client leaves too much unread data in its buffer, Redis assumes something is wrong with it and closes the connection.

This limit can be set for three different situations:

  • Normal -> Normal clients include MONITOR clients
  • Slave -> Slave node clients
  • pubsub -> pub/sub clients

The directive takes the form:

client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
  • Hard limit: if the buffer reaches the hard limit, Redis disconnects the client immediately.
  • Soft limit and soft seconds: the client is disconnected only if its buffer stays above the soft limit for soft seconds continuously.

By default, normal clients are not limited, because they only receive data when they ask for it, so only asynchronous clients (slaves and pub/sub subscribers) can build up a backlog. Setting all three values to 0 disables the limit and keeps connections alive indefinitely, but an unread buffer can then grow until it exhausts memory.

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

15.9 client-query-buffer-limit 1gb

The client query buffer accumulates incoming commands. It is capped at a fixed default (1gb) to keep a protocol desynchronization from consuming unbounded memory.

15.10 proto-max-bulk-len 512mb

In the Redis protocol, bulk requests such as command arguments are normally limited to 512mb; you can change that limit here.

15.11 hz 10

Redis calls internal functions to perform many background tasks, such as closing timeout connections, cleaning up expired keys that have never been requested, and so on. Not all background tasks are executed at the same frequency, and Redis uses the Hz parameter to determine how often the task is executed. The default Hz is 10. Increasing this value will consume more CPU when Redis is idle, but it will also make Redis more proactive in cleaning up stale keys and more precise in cleaning up stale connections. The value ranges from 1 to 500, although a value greater than 100 is not recommended. Most users should use the default, or up to 100.

15.12 aof-rewrite-incremental-fsync yes

When a child process rewrites the AOF file, with this option enabled the new file is fsynced every 32MB of data generated. Committing the file to disk incrementally avoids large latency spikes.

15.13 LFU tuning

The Redis LFU eviction policy can be tuned. Each key's LFU counter is only 8 bits, so its maximum value is 255; Redis therefore uses a probabilistic, logarithmic counter rather than incrementing on every access. When a key is accessed, the counter is updated as follows:

  1. Pick a random number r between 0 and 1.
  2. Compute the probability p = 1 / (old_value * lfu_log_factor + 1).
  3. Increment the counter only if r < p.

The default lfu-log-factor is 10. The table below shows how the counter grows under different factors; as you can see, the smaller the factor, the faster the counter saturates:

+--------+------------+------------+------------+------------+------------+
| factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
+--------+------------+------------+------------+------------+------------+
| 0      | 104        | 255        | 255        | 255        | 255        |
| 1      | 18         | 49         | 255        | 255        | 255        |
| 10     | 10         | 18         | 142        | 255        | 255        |
| 100    | 8          | 11         | 49         | 143        | 255        |
+--------+------------+------------+------------+------------+------------+

Note 1: the table above was obtained by running commands like:
redis-benchmark -n 1000000 incr foo
redis-cli object freq foo
Note 2: the counter starts at 5; otherwise newly created keys would be evicted almost immediately.

I don't fully understand lfu-decay-time yet; I'll fill this in after more research.

lfu-log-factor 10
lfu-decay-time 1

16. Active defragmentation (experimental)

# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 25

# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75