Redis
Master-slave replication
1. What if there is not enough memory when new data enters Redis
- Before storing new data, Redis calls freeMemoryIfNeeded() to check whether memory is sufficient. If memory does not meet the minimum requirement for the newly added data, Redis temporarily deletes some data to clear storage space for the current command. This strategy is called the eviction algorithm
- The eviction process is not guaranteed to clear enough space. If repeated attempts still cannot satisfy the storage requirement, an out-of-memory error is thrown. It is recommended to set maxmemory to 50-60 percent of physical memory
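As a minimal sketch of configuring this (assuming the redis-py client and a locally reachable instance; the 4gb figure is an arbitrary example for a machine with roughly 8 GB of RAM), the memory ceiling and eviction policy can be set at runtime or in redis.conf:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Cap memory at roughly 50-60% of physical RAM; 4gb is an example value
# (equivalent to "maxmemory 4gb" in redis.conf).
r.config_set("maxmemory", "4gb")

# Choose an eviction policy so writes evict old data instead of failing;
# allkeys-lru is a common general-purpose choice.
r.config_set("maxmemory-policy", "allkeys-lru")

print(r.config_get("maxmemory"), r.config_get("maxmemory-policy"))
```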
2. Master/slave replication
2.1. Synchronization phase
- Notes on the master during the synchronization phase
- If the volume of master data is large, avoid peak traffic hours when synchronizing data, to prevent blocking the master and affecting services
- If the replication buffer size is not set properly, data overflow can occur. If full replication takes too long and the buffered data has already been discarded by the time partial replication starts, a second full replication must be performed, sending the slave into an endless loop of full replications
- Set via the following directive
repl-backlog-size ?mb
- The master should not use too high a proportion of the machine's memory. It is recommended to use 50-70 percent of memory, leaving the rest for executing BGSAVE and creating the replication buffer
- Notes on the slave during the synchronization phase
- It is advisable to disable external service on the slave during full or partial replication; otherwise the server may block responses or serve inconsistent data during synchronization
slave-serve-stale-data yes|no
- During data synchronization, the master sends commands to the slave; the master effectively acts as a client of the slave
- If multiple slaves request synchronization from the master at the same time, the master sends too many RDB files at once, which can severely strain bandwidth. If the master's bandwidth is insufficient, stagger the slaves' synchronization according to business needs
2.2. Command propagation phase
- Partial replication in the command propagation phase
- Network interruptions during command propagation
- Momentary flickering disconnect and reconnect: ignored
- Short network interruption: partial replication
- Long network interruption: full replication
- The three elements of partial replication
- Server run ID (runid)
- An identification code for each run of a server; one server started multiple times generates multiple run IDs
- The run ID consists of 40 random hexadecimal characters
- The run ID is used between servers to identify them. If two operations must target the same server, each operation must carry the corresponding run ID
- The run ID is automatically generated when a server starts. When the master first connects to a slave, it sends its run ID to the slave, which saves it. The run ID of a node can be viewed with the info server command
- Replication backlog buffer of the master server
- Consists of offsets and byte values
- The master records the offset of each message it sends, and the slave records the offset of each message it receives
- The buffer is a FIFO queue that stores the commands the server has propagated. Each time the master propagates a command, it also records the command in the replication buffer. The default buffer size is 1 MB; when the queue is full, the earliest-enqueued element is popped and the new element is put into the queue
- Replication offsets of the master and slave servers
- The offset determines whether the data sent by the master to the slave is replicated in full or in part
2.3. Workflow
- First, the slave sends the command psync ? -1 ("?" because the master's runid is not yet known, and -1 because the slave has no replication offset)
- After receiving the command, the master executes bgsave to generate an RDB file, records the current replication offset, replies with +FULLRESYNC runid offset, and sends the RDB file to the slave through the socket
- The slave receives +FULLRESYNC, saves the master's runid and offset, clears its current data, receives the RDB file through the socket, and restores the RDB data
- Write operations on the master keep changing the offset in the meantime, and the slave keeps sending heartbeat commands, so the replication backlog buffer stores the new data; after full replication completes, partial replication follows to catch up
- The slave sends the command psync runid offset. After receiving it, the master checks whether the runid matches its own and whether the offset is still covered by the replication backlog buffer
- If either the runid or the offset check fails, full replication is performed
- If both checks pass, the master compares the slave's offset with its own; if they are equal, the slave is already up to date and nothing needs to be sent
- If they differ, the master sends +CONTINUE offset and then sends the data between the slave's offset and its own offset from the replication backlog buffer
- The slave receives +CONTINUE and saves the master's offset; after receiving the remaining information, it executes bgrewriteaof to restore the data
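To make the run IDs and offsets above concrete, here is a minimal sketch (assuming redis-py and a locally reachable master) that reads the replication fields the psync workflow relies on from INFO replication:

```python
import redis

master = redis.Redis(host="localhost", port=6379)

# INFO replication exposes the replication id and offsets used by psync.
info = master.info("replication")

print("role:", info["role"])                        # master or slave
print("replid:", info.get("master_replid"))         # 40-char hex id
print("master offset:", info.get("master_repl_offset"))

# Each connected slave reports the offset it has acknowledged; the gap
# between it and master_repl_offset is the data still to be replicated.
for key, value in info.items():
    if key.startswith("slave"):
        print(key, value)
```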
3. Heartbeat mechanism
- During the command propagation phase, the master and slave exchange information, using a heartbeat mechanism to keep each other aware that the peer is online
- Master internal heartbeat command: PING
- Period: determined by repl-ping-slave-period; the default is 10s
- Function: checks whether the slave is online
- Query: INFO replication shows the time since the slave's last connection; a lag value of 0 or 1 is considered normal
- Slave heartbeat task
- Internal command: REPLCONF ACK {offset}
- Period: 1s
- Function 1: reports the slave's replication offset and fetches the latest data change commands
- Function 2: Check whether the master is online
- Precautions during the heartbeat phase
- When most slaves are disconnected or their latency is too high, the master rejects all information synchronization operations to ensure data stability
min-slaves-to-write 2
min-slaves-max-lag 8
- If fewer than 2 slaves remain, or the lag of all slaves is greater than or equal to 8s, the master's write function is forcibly disabled and data synchronization stops
- The number of slaves is confirmed through the REPLCONF ACK commands sent by the slaves
- The slave lag is confirmed through the REPLCONF ACK commands sent by the slaves
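As a sketch of applying the guard settings above at runtime (assuming redis-py and a local master; note that Redis 5+ spells these directives min-replicas-*, while older versions use min-slaves-*):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Stop accepting writes unless at least 2 replicas are connected and
# none of them lags more than 8 seconds behind.
r.config_set("min-replicas-to-write", 2)
r.config_set("min-replicas-max-lag", 8)
```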
4. Common problems with master/slave replication
4.1. Frequent full replication
As the system runs, the master's data volume grows. Once the master restarts, its runid changes, forcing all slaves to perform full replication
- Internal optimization scheme
- The master internally creates a master_replid variable, generated with the same strategy as runid, 41 characters long, and sends it to all slaves
- When the master is shut down, the shutdown save command performs RDB persistence and saves the runid and offset into the RDB file
- repl-id repl-offset
- These can be viewed with the redis-check-rdb command
- After the master restarts, it loads the RDB file and restores the data
- After the restart, the repl-id and repl-offset saved in the RDB file are loaded into memory
- master_repl_id = repl-id; master_repl_offset = repl-offset
- These can be viewed with the INFO command
- Effect: the machine saves its previous runid and restores it after a restart, so all slaves recognize it as the previous master
4.2 Frequent full replication 2
- Symptom: the network is interrupted, and the slave cannot provide service
- Cause: the replication buffer is too small; after a network disconnection, the slave's offset falls outside the buffer's range, triggering full replication
- End result: the slave repeatedly performs full replication
- Solution: modify the replication buffer size
repl-backlog-size ?mb
- The recommended settings are as follows
- Measure the average master-to-slave reconnection duration: second
- Measure the master's average volume of write command data generated per second: write_size_per_second
- Optimal replication buffer size = 2 * second * write_size_per_second
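A quick worked example of this formula under assumed measurements (both numbers are illustrative, and the redis-py calls assume a local instance):

```python
import redis

# Illustrative measurements (replace with values observed on your system):
second = 5                         # average master->slave reconnection time, seconds
write_size_per_second = 1024 ** 2  # average write-command volume per second, ~1 MB

# Sizing rule from the notes above: 2 * second * write_size_per_second
optimal = 2 * second * write_size_per_second
print(optimal)  # 10485760 bytes = 10 MB

# Apply it at runtime (equivalent to "repl-backlog-size 10mb" in redis.conf).
r = redis.Redis(host="localhost", port=6379)
r.config_set("repl-backlog-size", str(optimal))
```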
4.3 Frequent network interruption
- Symptom: the CPU usage of the master is too high, or slaves are frequently disconnected
- Cause of the problem
- Each slave sends a REPLCONF ACK command to the master every second
- When a slave encounters a slow query (keys *, hgetall, etc.), it occupies a lot of CPU
- When the master calls the replication timing function replicationCron() every second, it finds that the slave has not responded for a long time
- End result: the master's resources (output buffers, bandwidth, connections, etc.) are heavily occupied
- Solution: set a reasonable timeout period to decide whether to release the slave
repl-timeout seconds
- This parameter defines the timeout threshold, 60 seconds by default; a slave exceeding it is released
4.4 Frequent network interruption 2
- Symptom: The slave and master are disconnected
- Cause of the problem:
- The master sends the ping command too infrequently
- The master's timeout setting is too short
- The ping command suffers packet loss on the network
- Solution
- Increase the frequency of the ping command
repl-ping-slave-period seconds
- The repl-timeout value must be at least 5-10 times the ping period; otherwise the slave is easily judged to have timed out
4.5 Data inconsistency
- Symptom: data obtained from different slaves is not synchronized
- Cause: network information is out of sync, so data transmission is delayed
- Solution:
- Optimize the network environment between the master and slaves; they are usually deployed in the same equipment room. Pay attention to this phenomenon when using Aliyun servers
- Monitor the latency of master and slave nodes; if a slave's lag is too large, temporarily block the program's data access to that slave
slave-serve-stale-data yes|no
- When disabled, the slave responds only to a few commands such as info and slaveof (use with caution, unless data consistency requirements are very high)
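A minimal sketch of the lag check described above (assuming redis-py, one master and one slave reachable on local ports 6379/6380, and an arbitrary 1024-byte threshold):

```python
import redis

master = redis.Redis(host="localhost", port=6379)
slave = redis.Redis(host="localhost", port=6380)

MAX_LAG_BYTES = 1024  # arbitrary threshold for this sketch

def slave_is_fresh() -> bool:
    """Compare replication offsets to decide whether reads may go to the slave."""
    master_offset = master.info("replication")["master_repl_offset"]
    slave_offset = slave.info("replication")["slave_repl_offset"]
    return master_offset - slave_offset <= MAX_LAG_BYTES

# Route reads: use the slave only while its lag is acceptable.
conn = slave if slave_is_fresh() else master
print(conn.get("some_key"))
```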
Sentinel mode
1. Definition
Sentinel is a distributed system that monitors the master and every slave. When a failure occurs, it selects a new master through a voting mechanism and connects all slaves to the new master. A sentinel system can also monitor multiple masters at the same time
- Monitoring
- Constantly checks whether the master and slaves are working properly
- Master liveness detection; monitoring of the running status of the master and slaves
- Notification: sends notifications to others (sentinels, clients) when a monitored server has a problem
- Automatic failover: disconnects the failed master, selects one slave as the new master, connects the other slaves to it, and informs clients of the new server address
- A sentinel is also a Redis server, but it does not provide data services; the number of sentinels is usually odd
2. Configuration
- Configure the master to monitor; sentinel_number is the number of sentinels that must agree for a vote to pass
sentinel monitor master_name master_host master_port sentinel_number
- Set how long a server may be unresponsive before it is considered down and a master/slave switchover is performed
sentinel down-after-milliseconds master_name milliseconds
- Set the maximum failover timeout
sentinel failover-timeout master_name milliseconds
- Set the number of slaves that synchronize data at the same time after a master/slave switchover. The larger the value, the more network resources are required; the smaller the value, the longer the overall synchronization takes
sentinel parallel-syncs master_name sync_slave_number
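For completeness, a minimal sketch of connecting through Sentinel with redis-py (assuming a sentinel on localhost:26379 monitoring a master group named mymaster):

```python
from redis.sentinel import Sentinel

# Discover the current master/slaves through a sentinel instead of
# hard-coding addresses; after a failover the same call returns the new master.
sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)
slave = sentinel.slave_for("mymaster", socket_timeout=0.5)

master.set("greeting", "hello")   # writes go to the master
print(slave.get("greeting"))      # reads can go to a slave
```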
3. Working principle
Master/slave switchover
- The sentinel system goes through three stages during a master/slave switchover
- Monitoring
- Notification
- Failover
- Phase 1: Synchronizes status information of each node
- Get the status of each sentinel (online or not)
- Gets the master state
- Master attribute
- runid
- Role: master
- Detailed information about each slave
- Get the status of all slaves (based on the slave information in the master)
- Slave information
- runid
- Role: slave
- master_host,master_port
- offset
- ...
- Phase 2: Send hello commands to synchronize information
- Phase 3: Failover
- If a sentinel keeps sending hello to a server and the server is down, the sentinel marks it with flags:SRI_S_DOWN (subjectively down) and tells the other sentinels; all the sentinels then send hello to confirm. In the terminal, +sdown means one sentinel considers the server down, and +odown means all the sentinels consider it down
- The sentinels vote among themselves, and the sentinel with the most votes becomes the leader. If the election fails, it loops until the election succeeds
- Select the standby master from the server list, excluding servers that are:
- Not online
- Slow response
- Disconnected from original master for a long time
- The priority principle
- priority
- offset
- runid
- The sentinel then sends commands
- Sends slaveof no one to the new master
- Sends slaveof new_master_ip port to the other slaves
Problem solving
1. Cache preheating
- Problem analysis
- High number of requests
- The data throughput between the master and slave is large, and the data synchronization operation frequency is high
- Advance preparation
- Routine data access records: collect statistics on frequently accessed hot data
- Build a data retention queue using the LRU data eviction strategy
- Preparation
- Classify the data in the statistics results; Redis preferentially loads the higher-level hot data
- Use multiple distributed servers to read data concurrently to speed up the data loading process
- Preheat hot data on the master and slaves simultaneously
- Implementation
- Trigger the data preheating process with a fixed script
- If conditions permit, use a CDN (content delivery network) for even better results
- Summary: cache preheating loads the relevant data into the cache system before the system starts, so that users directly query pre-warmed cache data, avoiding the situation where the first user requests must query the database and then write the data back into the cache
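A minimal preheating sketch (assuming redis-py, a hypothetical load_hot_records() helper standing in for the statistics/ranking step, and an arbitrary 1-hour TTL):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

def load_hot_records():
    """Hypothetical stand-in for the statistics step: returns the hottest
    records, highest priority first, e.g. pulled from access logs or a DB."""
    return [("product:1001", "laptop"), ("product:1002", "phone")]

def preheat():
    # Load hot data into the cache before the system starts serving traffic,
    # highest-level (hottest) data first, as described above.
    pipe = r.pipeline()
    for key, value in load_hot_records():
        pipe.set(key, value, ex=3600)  # 1h TTL, an arbitrary choice here
    pipe.execute()

if __name__ == "__main__":
    preheat()
```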
2. Cache avalanche
- Database server crash (scenario 1)
- The system is running smoothly when suddenly the number of database connections surges
- The application server cannot process requests in a timely manner
- Large numbers of 408 and 500 error pages appear
- Customers repeatedly refresh pages trying to get data
- The database crashes
- Restarting the application server has no effect
- The Redis server crashes
- The Redis cluster crashes
- After the database restarts, it is knocked down again by the instantaneous traffic
- Problem analysis
- A relatively large set of keys in the cache expires within a short period
- During this period, Redis misses on the expired data and fetches it from the database
- The database receives a large number of requests simultaneously and cannot process them in time
- A large number of Redis requests back up, and timeouts begin to occur
- The database traffic surges and the database crashes
- There is still no data available in the cache after the restart
- The Redis server's resources are heavily occupied and the Redis server crashes
- The Redis cluster collapses
- The application server cannot get responses in time, receives more and more client requests, and crashes
- The application servers, Redis, and the database are all restarted, but the effect is not ideal
- Solutions (strategy)
- More static page processing
- Build a multi-level cache architecture
- Nginx cache + Redis cache + Ehcache
- Detect severely time-consuming MySQL operations and optimize the business logic
- Troubleshoot database bottlenecks, such as timeout queries and time-consuming transactions
- Disaster warning system
- Monitor Redis server performance metrics
- CPU usage
- Memory capacity
- Average query response time
- Number of threads
- Traffic limiting and degradation
- Temporarily sacrifice some user experience by restricting access for some requests, reducing the pressure on the application server, then gradually reopen services once the system is running steadily
- Solutions (technique)
- Switch between LRU and LFU
- Adjust the data validity period policy
- Differentiate validity periods by business data class: e.g. 90 minutes for class A, 80 minutes for class B, 70 minutes for class C
- Set expiration as a fixed time plus a random value, diluting the concentration of expiring keys (see the sketch after this list)
- Use permanent keys for super-hot data
- Regular maintenance (automatic + manual)
- Analyze the traffic of data about to expire to decide whether to delay expiry; combined with traffic statistics, delay the expiry of hot data
- Locking
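A minimal sketch of the fixed-time-plus-random-value idea (assuming redis-py; the base TTL and jitter range are illustrative):

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379)

def set_with_jitter(key, value, base_ttl=5400, jitter=600):
    """Cache a value with base_ttl seconds plus a random 0..jitter offset,
    so keys written together do not all expire at the same moment."""
    r.set(key, value, ex=base_ttl + random.randint(0, jitter))

# Class-A data: 90 minutes base, expiries spread over a 10-minute window.
set_with_jitter("product:1001", "laptop")
```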
- Summary: a cache avalanche means that too much data expires at the same instant, putting pressure on the database server. If concentrated expiration times can be effectively avoided, avalanches can largely be prevented (about 40% of the effect). Use this together with the other strategies, monitor the server's operating data, and make quick adjustments based on the running records
3. Cache breakdown
- Database server crash (scenario 2)
- The system is running smoothly
- The number of database connections suddenly shoots up
- No large batch of keys has expired on the Redis server
- Redis memory usage is smooth, with no fluctuations
- The CPU usage of the Redis server is normal
- The database crashes
- Problem analysis
- A single key in Redis expired, and that key was carrying very heavy traffic
- Multiple data requests from the servers pressed directly on Redis, and all of them missed
- A large number of accesses to the same data in the database is initiated within a short period
- Solutions (technique)
- Presetting
- Taking e-commerce as an example, each merchant designates several flagship products according to store level and increases the expiry time of those products' keys during shopping festivals
- On-the-fly adjustment
- Monitor traffic; for data whose traffic surges naturally, extend the expiry period or make it a permanent key
- Background data refresh
- Start scheduled tasks to refresh data validity periods before peak hours, ensuring the data is not lost
- Second-level cache
- Set different expiry times for the levels so they are not evicted at the same time
- Locking
- Use a distributed lock to prevent breakdown, but watch out for the resulting performance bottleneck; use with caution (see the sketch below)
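A minimal sketch of the locking approach (assuming redis-py; the key names, TTLs, and the load_from_database() helper are illustrative). Only the first caller that acquires the lock rebuilds the cache; everyone else waits and re-reads:

```python
import time
import uuid
import redis

r = redis.Redis(host="localhost", port=6379)

def load_from_database(key):
    """Hypothetical stand-in for the real database query."""
    return "value-from-db"

def get_with_lock(key, ttl=3600):
    while True:
        value = r.get(key)
        if value is not None:
            return value

        token = str(uuid.uuid4())
        # SET NX EX: acquire a lock that expires by itself, so a crashed
        # holder cannot block all other callers forever.
        if r.set("lock:" + key, token, nx=True, ex=10):
            try:
                value = load_from_database(key)  # only one caller hits the DB
                r.set(key, value, ex=ttl)
                return value
            finally:
                # Release only if we still own the lock. (In production this
                # compare-and-delete should be made atomic with a Lua script.)
                if r.get("lock:" + key) == token.encode():
                    r.delete("lock:" + key)
        # Someone else holds the lock: wait briefly, then re-check the cache.
        time.sleep(0.05)
```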
4. Performance monitoring indicators
- Performance indicators: Performance
- Average time to respond to requests
- latency
- Average number of requests processed per second
- instantaneous_ops_per_sec
- Cache query hit rate
- hit_rate (calculated)
- Memory indicators: Memory
- Current memory usage
- used_memory
- Memory fragmentation ratio (related to defragmentation)
- mem_fragmentation_ratio
- Total number of keys evicted to avoid memory overflow
- evicted_keys
- Number of clients blocked by blocking operations (BLPOP, etc.)
- blocked_clients
- Basic activity indicators: Basic_activity
- Total number of current client connections
- connected_clients
- Total number of currently connected slaves
- connected_slaves
- Number of seconds since the last master/slave interaction
- master_last_io_seconds_ago
- Total number of keys
- keyspace
- Persistence indicators: Persistence
- Time of the server's last RDB persistence
- rdb_last_save_time
- Number of data changes since the server's last RDB persistence
- rdb_changes_since_last_save
- Error indicators: Error
- Total number of client connections rejected
- rejected_connections
- Total number of key misses
- keyspace_misses
- Number of seconds the master/slave link has been down
- master_link_down_since_seconds
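Most of these fields come straight from INFO; hit_rate is derived. A minimal collection sketch (assuming redis-py and a local instance):

```python
import redis

r = redis.Redis(host="localhost", port=6379)
stats = r.info("stats")
memory = r.info("memory")

# hit_rate is calculated, as noted above: hits / (hits + misses).
hits = stats["keyspace_hits"]
misses = stats["keyspace_misses"]
hit_rate = hits / (hits + misses) if hits + misses else 0.0

print("instantaneous_ops_per_sec:", stats["instantaneous_ops_per_sec"])
print("evicted_keys:", stats["evicted_keys"])
print("used_memory:", memory["used_memory"])
print("mem_fragmentation_ratio:", memory["mem_fragmentation_ratio"])
print("hit_rate: %.2f%%" % (hit_rate * 100))
```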
5. Monitoring methods
- Tools
- Cloud Insight Redis
- Prometheus
- redis-stat
- redis-faina
- RedisLive
- zabbix
- Commands
- redis-benchmark
- redis-cli
- monitor
- slowlog
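As a small illustration, the slowlog can also be pulled programmatically (a sketch assuming redis-py and a local instance; redis-py returns each entry as a dict):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Equivalent of "SLOWLOG GET 5" in redis-cli: the five most recent commands
# that exceeded the slowlog-log-slower-than threshold.
for entry in r.slowlog_get(5):
    print(entry["id"], entry["duration"], entry["command"])
```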