Sentinel mode (Sentinel)
Sentinel is Redis's high-availability solution. It monitors the status of one or more Redis master services, along with all of the slaves of those masters. When a master goes offline, one of its slaves is automatically promoted to master and takes over the offline master's requests. It is generally recommended to deploy an odd number of Sentinels, because an election is only valid when more than half of them agree.
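The "more than half" rule is why an odd number of Sentinels is recommended; an even node buys no extra fault tolerance. A minimal illustrative sketch (not part of Redis itself):

```python
def majority(sentinel_count: int) -> int:
    """Minimum number of agreeing Sentinels for a valid election: more than half."""
    return sentinel_count // 2 + 1

# With 3 Sentinels, 2 votes suffice; adding a 4th raises the threshold
# to 3, so the extra node tolerates no additional failures.
for n in (3, 4, 5):
    print(f"{n} sentinels -> majority of {majority(n)}")
```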
Sentinel mode setup
Create a sentinel folder and copy the sentinel.conf file into it. Since we are setting up a three-Sentinel scenario, make two more copies of the Sentinel configuration file.
Next, edit the configuration file:
```
bind 0.0.0.0
protected-mode no
port 26379
daemonize yes
# Primary node: name, IP, port, quorum
sentinel monitor mymaster 127.0.0.1 6378 3
# Failover timeout
sentinel failover-timeout mymaster 2000
# If the heartbeat gets no valid reply within 2000 ms, the node is
# considered down and a slave is promoted to master
sentinel down-after-milliseconds mymaster 2000
```
Change the port number in the other two configuration files accordingly.
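Rather than editing each copy by hand, the three configuration files can be generated from a template. This is a sketch; the file names, ports, and settings follow this article's layout and are otherwise illustrative:

```python
from pathlib import Path

# Template mirroring the settings shown above; adjust mymaster's
# address and the ports for your own environment.
TEMPLATE = """\
bind 0.0.0.0
protected-mode no
port {port}
daemonize yes
sentinel monitor mymaster 127.0.0.1 6378 3
sentinel failover-timeout mymaster 2000
sentinel down-after-milliseconds mymaster 2000
"""

def write_sentinel_confs(directory: str, ports=(26377, 26378, 26379)):
    """Write one sentinel-<port>.conf per Sentinel into `directory`."""
    out = Path(directory)
    out.mkdir(parents=True, exist_ok=True)
    for port in ports:
        (out / f"sentinel-{port}.conf").write_text(TEMPLATE.format(port=port))

write_sentinel_confs("sentinel")
```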
Then start the command as follows:
```
./redis-server ../sentinel/sentinel-26377.conf --sentinel
```
When we force the master node offline and then check the slave nodes, you can see that after the election the slave node 6378 has become the new master, and slave 6377 now records 6378 as its master. When we restart the original master, 6379 automatically rejoins as a slave node.
Principles of Sentinel mode
How it works
- Each Sentinel sends a PING once per second to the master, the slaves, and the other Sentinel instances it knows about.
- If an instance takes longer than down-after-milliseconds to give a valid reply, Sentinel marks it as subjectively offline.
- If a master is marked as subjectively offline, every Sentinel monitoring that master confirms the subjective offline state at a rate of once per second.
- Normally, each Sentinel sends INFO commands to all known masters and slaves every 10 seconds.
- When a master is marked objectively offline, Sentinel sends INFO to all slaves of the offline master once per second instead of once every 10 seconds.
- If not enough Sentinels agree that the master is offline, its objective offline status is removed.
Three scheduled tasks
Sentinel runs three scheduled tasks internally:
- Every 10 seconds, each Sentinel runs INFO on the master and slaves. This task serves two purposes: discovering slave nodes and confirming the master/slave relationship.
- Every 2 seconds, each Sentinel exchanges information with the others through a pub/sub channel on the master node. Sentinel nodes publish to and subscribe on the master's __sentinel__:hello channel, sharing their own state and their "view" of the master in order to reach a consensus.
- Every 1 second, each Sentinel pings the other Sentinels and the Redis nodes (mutual monitoring). This heartbeat detection is the basis for failure determination.
Subjective offline
Subjective offline (Subjectively Down, SDOWN) is an offline judgment made by a single Sentinel instance about a server: one Sentinel, on its own, considers a service offline (possibly because a subscription was missed, the network is unreachable, and so on). Concretely, if a server fails to respond to Sentinel's PING within down-after-milliseconds, or returns an error, Sentinel marks the server as SDOWN.
Sentinel sends a PING once per second to every instance it has a command connection to (master, slaves, other Sentinels) and decides whether the instance is online, from that Sentinel's point of view, by checking whether the PING reply is valid.
If an instance fails to return a valid reply within down-after-milliseconds, Sentinel considers it offline and changes its flags state to SRI_S_DOWN. Note that when several Sentinels monitor the same service, each may be configured with a different down-after-milliseconds value; keep this in mind in production.
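The SDOWN check is essentially a comparison between "time since the last valid reply" and down-after-milliseconds. A minimal sketch of that logic (illustrative, not Redis source):

```python
import time

DOWN_AFTER_MS = 2000  # down-after-milliseconds from the config above

def is_subjectively_down(last_valid_reply_ms: float, now_ms: float) -> bool:
    """SDOWN: no valid PING reply for longer than down-after-milliseconds."""
    return now_ms - last_valid_reply_ms > DOWN_AFTER_MS

now = time.time() * 1000
assert not is_subjectively_down(now - 500, now)   # replied 0.5 s ago: online
assert is_subjectively_down(now - 3000, now)      # silent for 3 s: SDOWN
```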
Objective offline
Objective offline (Objectively Down, ODOWN) means that multiple Sentinel instances have made the SDOWN judgment about the same server and, after communicating with each other via the sentinel is-master-down-by-addr command, conclude that the server is offline; failover is then started.
A server is not marked objectively offline until a sufficient number of Sentinels have marked it subjectively offline. This number is the quorum set in the sentinel monitor directive: at least quorum Sentinels must consider the master faulty before it is taken offline and failover begins. Typically, quorum is set to half the number of Sentinels plus one, for example 2 for three Sentinels.
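One subtlety worth modeling: in Redis Sentinel the quorum only controls how many SDOWN reports trigger ODOWN; the failover itself must still be authorized by electing a leader with a strict majority of all Sentinels. A toy sketch of the two thresholds (illustrative function names):

```python
def majority(n_sentinels: int) -> int:
    """A leader election always needs more than half of all Sentinels."""
    return n_sentinels // 2 + 1

def can_mark_odown(sdown_votes: int, quorum: int) -> bool:
    # quorum: Sentinels agreeing on SDOWN needed to declare ODOWN
    return sdown_votes >= quorum

def can_authorize_failover(leader_votes: int, n_sentinels: int) -> bool:
    # regardless of how low quorum is set, the failover leader still
    # needs a majority of all known Sentinels
    return leader_votes >= majority(n_sentinels)

# 5 Sentinels with quorum 2: two reports reach ODOWN,
# but the failover still needs 3 votes behind one leader.
assert can_mark_odown(2, quorum=2)
assert not can_authorize_failover(2, n_sentinels=5)
assert can_authorize_failover(3, n_sentinels=5)
```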
Electing the lead Sentinel
Once a Redis service is judged objectively offline, the Sentinels monitoring it negotiate to elect a lead Sentinel, which carries out the failover of that service. The election follows these rules:
1) Every Sentinel is equally eligible to be elected leader.
2) In a given election (epoch), each Sentinel has exactly one chance to vote for a leader; once cast, the vote cannot be changed.
3) Votes are granted on a first-come, first-served basis: once a Sentinel has voted for a leader in the current epoch, it rejects all later requests to vote for anyone else.
4) Every Sentinel that detects the service is objectively offline asks the other Sentinels to elect it as leader.
5) When a source Sentinel sends sentinel is-master-down-by-addr <ip> <port> <current_epoch> <runid> to a target Sentinel with the runid parameter set to its own run ID, it is asking the target Sentinel to vote for it as leader.
6) The source Sentinel checks the target Sentinel's reply to its vote request: if the leader_runid and leader_epoch in the reply match the source Sentinel's, the target has agreed to elect it leader.
7) A Sentinel becomes the leader once more than half of the Sentinels have voted for it.
8) If no leader is elected within the time limit, a new epoch is started after a delay and the election is rerun.
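Rules 2) and 3) describe per-epoch, first-come first-served voting. A toy model of a single voter (class and method names are illustrative, not Redis internals):

```python
class SentinelVoter:
    """Toy model of rules 2)-3): one vote per epoch, first-come first-served."""

    def __init__(self):
        self.voted_epoch = -1
        self.voted_for = None

    def request_vote(self, candidate_runid: str, epoch: int) -> str:
        """Return the run ID this voter backs in `epoch`."""
        if epoch > self.voted_epoch:
            self.voted_epoch = epoch       # new epoch: vote for the first requester
            self.voted_for = candidate_runid
        return self.voted_for              # later requests get the same answer

voter = SentinelVoter()
assert voter.request_vote("sentinel-A", epoch=1) == "sentinel-A"
assert voter.request_vote("sentinel-B", epoch=1) == "sentinel-A"  # rejected
assert voter.request_vote("sentinel-B", epoch=2) == "sentinel-B"  # new epoch
```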
Automatic failover mechanism
Sentinel keeps track of all the slave nodes of the master. The lead Sentinel selects the new master from that list according to the following rules:
- Filter out nodes that are subjectively offline.
- Select the slave with the best slave-priority (a weight that can be set in the configuration file; in Redis a lower value ranks higher, and 0 means the slave is never promoted). If a single node wins, return it; otherwise continue.
- Select the node with the largest replication offset, because a larger offset means more completely replicated data. If a single node wins, return it; otherwise continue.
- Select the node with the smallest run_id.
The selected slave is made master with the slaveof no one command, and the other nodes are made its slaves with the slaveof command. When the offline service comes back online, Sentinel sends it a slaveof command so that it becomes a slave of the new master.
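The selection rules above amount to a filter followed by a three-level sort. A sketch under the assumption that each slave is a dict with sdown/priority/offset/run_id fields (lower slave-priority ranks first, as in Redis, and 0 excludes a replica):

```python
def pick_new_master(slaves):
    """Drop SDOWN nodes and priority-0 replicas, then prefer lower
    slave-priority, larger replication offset, and smaller run_id."""
    candidates = [s for s in slaves if not s["sdown"] and s["priority"] > 0]
    candidates.sort(key=lambda s: (s["priority"], -s["offset"], s["run_id"]))
    return candidates[0] if candidates else None

slaves = [
    {"run_id": "c", "priority": 100, "offset": 200, "sdown": False},
    {"run_id": "a", "priority": 100, "offset": 300, "sdown": False},
    {"run_id": "b", "priority": 100, "offset": 300, "sdown": True},  # filtered out
]
# "a" wins: same priority as "c" but a larger offset; "b" is SDOWN.
assert pick_new_master(slaves)["run_id"] == "a"
```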
Redis cluster
Cluster setup
Create a cluster folder and put the redis configuration file in it
```
daemonize yes                         # run as a daemon
logfile "6379.log"
dbfilename "dump-6379.rdb"
protected-mode no                     # disable protected mode (otherwise bind an IP or set a password)
port 6379
bind 0.0.0.0
pidfile /var/run/redis_6379.pid       # pid file
cluster-enabled yes                   # enable cluster mode
cluster-config-file nodes-6379.conf   # cluster configuration file
cluster-node-timeout 15000            # node timeout; the default is 15 seconds
```
After the configuration is complete, make two more copies of the configuration file and change the port numbers.
Start node
```
./redis-server ../cluster/redis-6380-master.conf
./redis-server ../cluster/redis-6381-master.conf
./redis-server ../conf/redis-6379-master.conf
```
Then you need to create the cluster. The old redis-trib.rb tool depends on Ruby, so install Ruby first. (Note: since Redis 5, the redis-cli --cluster command used below no longer requires Ruby.)
Install Ruby:
```
wget cache.ruby-lang.org/pub/ruby/2….
tar xvf ruby-2.3.1.tar.gz
cd ruby-2.3.1
./configure --prefix=/usr/local/ruby
make
make install
cd /usr/local/ruby
cp bin/ruby /usr/local/bin
cp bin/gem /usr/local/bin
wget rubygems.org/downloads/r…
gem install ./redis-3.3.0.gem
gem list --check redis
```
Start the cluster
```
./redis-cli --cluster create 192.168.121.133:6379 192.168.121.133:6380 192.168.121.133:6381 \
    192.168.121.133:6376 192.168.121.133:6377 192.168.121.133:6378 --cluster-replicas 1
```
The final --cluster-replicas 1 sets the ratio of slave to master nodes: 1 means one slave per master, so three masters require three slaves, and a cluster needs at least three masters by default. If the cluster fails to start, delete all RDB files and the conf files whose names start with node, then create the cluster again.
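Behind the scenes, the cluster divides the key space into 16384 hash slots, computed as CRC16(key) mod 16384, with keys sharing a {hash tag} guaranteed to land in the same slot. A sketch of the slot calculation (a plain bitwise CRC16/XMODEM; the real server uses an equivalent table-driven version):

```python
def crc16_xmodem(data: bytes) -> int:
    """Bitwise CRC16/XMODEM (polynomial 0x1021), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 slots, honouring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:   # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# keys sharing a hash tag land in the same slot (usable in multi-key ops)
assert key_slot("{user:1}:name") == key_slot("{user:1}:age")
assert 0 <= key_slot("foo") < 16384
```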
After the cluster starts successfully, connect with the client, adding -c to enable cluster mode:
```
./redis-cli -p 6379 -c
127.0.0.1:6379> cluster info     # show cluster information
127.0.0.1:6379> cluster nodes    # list the nodes in the cluster
```
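The -c flag makes redis-cli follow the cluster's MOVED and ASK redirection errors automatically; without it, a client must parse those errors itself and retry against the indicated node. A minimal sketch of that parsing (function name is illustrative):

```python
def parse_moved(error: str):
    """Parse a cluster redirection error such as 'MOVED 3999 127.0.0.1:6381'.
    Returns (slot, host, port), or None if it is not a redirection."""
    parts = error.split()
    if len(parts) == 3 and parts[0] in ("MOVED", "ASK"):
        slot = int(parts[1])
        host, _, port = parts[2].rpartition(":")
        return slot, host, int(port)
    return None

assert parse_moved("MOVED 3999 127.0.0.1:6381") == (3999, "127.0.0.1", 6381)
assert parse_moved("OK") is None
```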