Redis cluster
Stand-alone
A single instance is a single point of failure and a performance bottleneck.
Cluster
Distributes the load across multiple nodes.
Master-slave replication
Definition
Replication mirrors the data set: write operations (add, delete, modify) go to the primary node, while read queries are offloaded to the secondary nodes.
High availability is implemented with Sentinel.
1. A Redis service can have multiple replicas; one instance acts as the Master and the others as Slaves.
2. As long as the network connection is normal, data stays synchronized between the Master and its Slaves.
3. Only the Master can execute write commands; Slaves can only execute read commands.
Slaves serve the read commands sent by clients, such as GET, LRANGE, SMEMBERS, HGET, and ZRANGE. Clients can connect to a Slave to execute read requests, reducing the read pressure on the Master.
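The read/write split described above can be sketched in plain Python. This is an illustrative router only (no real Redis connection; class and command names are assumptions for the sketch): write commands go to the Master, read commands rotate across the Slaves.

```python
# Illustrative sketch of read/write splitting: writes -> master,
# reads -> round-robin over slaves. Not a real Redis client.
from itertools import cycle

WRITE_COMMANDS = {"SET", "DEL", "LPUSH", "HSET", "ZADD"}
READ_COMMANDS = {"GET", "LRANGE", "SMEMBERS", "HGET", "ZRANGE"}

class ReadWriteRouter:
    def __init__(self, master, slaves):
        self.master = master
        self._slaves = cycle(slaves)  # round-robin over the replicas

    def route(self, command):
        cmd = command.upper()
        if cmd in WRITE_COMMANDS:
            return self.master          # only the Master accepts writes
        if cmd in READ_COMMANDS:
            return next(self._slaves)   # spread reads over the Slaves
        raise ValueError(f"unrouted command: {command}")

router = ReadWriteRouter("127.0.0.1:6379", ["127.0.0.1:6380", "127.0.0.1:6381"])
print(router.route("SET"))   # -> 127.0.0.1:6379 (master)
print(router.route("GET"))   # -> 127.0.0.1:6380 (first slave)
print(router.route("GET"))   # -> 127.0.0.1:6381 (second slave)
```

In practice a real client library or proxy handles this routing, but the principle is the same: the command type, not the key, decides which node serves the request.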
How to set up master-slave replication
Command-line option: `redis-server --slaveof <host> <port>` configures the current service as the Slave of another Redis service.

```
redis-server --port 6380 --slaveof 127.0.0.1 6379
```

The SLAVEOF host port command turns the current server from a Master into a Slave of another server.

```
redis> SLAVEOF 192.168.1.1 6379   # convert this server into a Slave
redis> SLAVEOF NO ONE             # revert this server to Master without discarding the synchronized data
```

Configuration file: on startup, the server reads the configuration file and automatically becomes a Slave of the specified server.

```
slaveof <masterip> <masterport>
slaveof 127.0.0.1 6379
```
Problems with master-slave replication: failover is manual
1. A Master can have multiple Slaves.
2. When a Slave goes offline, only read-request throughput degrades.
3. When the Master goes offline, write requests can no longer be executed.
4. To recover, run SLAVEOF NO ONE on one Slave to promote it to Master, then run SLAVEOF on the other Slaves to point them at the new Master and resynchronize from it.
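The manual failover steps above can be modeled with a toy in-memory sketch (plain Python, not real Redis; `Node` and `manual_failover` are illustrative names): promote one Slave with SLAVEOF NO ONE, then repoint the remaining Slaves.

```python
# Toy model of manual failover: SLAVEOF NO ONE on one slave,
# then SLAVEOF <new master> on the rest. Illustrative only.
class Node:
    def __init__(self, name):
        self.name = name
        self.role = "slave"
        self.master = None

    def slaveof(self, master):     # models: SLAVEOF host port
        self.role = "slave"
        self.master = master

    def slaveof_no_one(self):      # models: SLAVEOF NO ONE
        self.role = "master"
        self.master = None

def manual_failover(slaves, promoted):
    promoted.slaveof_no_one()          # step 1: promote one Slave
    for s in slaves:
        if s is not promoted:
            s.slaveof(promoted)        # step 2: repoint the others
    return promoted

s1, s2, s3 = Node("6380"), Node("6381"), Node("6382")
new_master = manual_failover([s1, s2, s3], s1)
print(new_master.role)    # -> master
print(s2.master.name)     # -> 6380
```

The point of the sketch is that every step is an operator action; nothing here happens automatically, which is exactly the gap Sentinel fills.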
Sentinel
Every Sentinel can monitor any number of Masters and the Slaves under each Master.
When a Sentinel considers a monitored server offline, it checks with the other Sentinels in the network to confirm whether the server is really offline. If the offline server is a Master, the Sentinel network automatically fails it over: one of the offline Master's Slaves is promoted to be the new Master, the remaining Slaves are repointed to replicate from it, and the system comes back online.
Sentinel configuration file
The configuration file must contain at least one monitor directive specifying the Master to watch:

```
sentinel monitor <name> <ip> <port> <quorum>
```

For example, `sentinel monitor mymaster 127.0.0.1 6379 2` monitors the Master named mymaster at the given IP address and port, and requires at least 2 Sentinels to agree that the Master is offline before it is treated as failed; failover is performed only when a majority of the Sentinels agree. Sentinel automatically discovers the Master's Slaves from the monitored Master, so only the Master needs to be configured. Sentinel's default port is 26379.
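The quorum-versus-majority distinction above can be sketched in a few lines of Python. This is a simplified model of the decision rule only (function names are illustrative, not Sentinel's API): the quorum decides when the Master counts as objectively down, while starting a failover additionally needs a strict majority of all Sentinels.

```python
# Simplified model of Sentinel's down-detection vs. failover rules.
def objectively_down(down_votes, quorum):
    # Master is objectively down once `quorum` sentinels report it down.
    return down_votes >= quorum

def can_failover(down_votes, total_sentinels):
    # Failover additionally requires a strict majority of all sentinels.
    return down_votes >= total_sentinels // 2 + 1

# 3 sentinels, quorum 2, and 2 of them report the master down:
print(objectively_down(2, quorum=2))        # -> True
print(can_failover(2, total_sentinels=3))   # -> True (2 is a majority of 3)
```

Note the asymmetry: with 5 Sentinels and a quorum of 2, two votes mark the Master down, but the failover itself still waits for 3 agreeing Sentinels.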
Sentinel summary
1. Master-slave replication spreads read requests across nodes; when a Slave goes offline, read capacity degrades.
2. Write requests still hit a single point: after the Master goes offline, Sentinel automatically performs a failover and promotes a Slave to the new Master.
3. Master-slave replication plus Sentinel solves read performance and high availability, but not write performance.
Redis Twemproxy
1. Master and Slave nodes do not share the write pressure.
2. Multiple nodes can share write requests by distributing them to different nodes.
3. This multi-node Sharding idea is similar to the horizontal partitioning of large tables in relational databases.
Twemproxy
Twemproxy is a proxy server developed by Twitter that is compatible with both Redis and Memcached. It lets users add multiple Redis servers to a server pool and distributes client command requests across the pool using user-selected hash and distribution functions. By using Twemproxy, we can shard the data set across multiple Redis servers, which then share the system load and the storage capacity: under the same hardware conditions, each server in a pool of N Redis servers handles on average 1/N of the client command requests, so adding servers to the pool scales the system's command throughput and storage capacity roughly linearly.
Configuration scheme
Twemproxy configuration:

```
redischi:
  listen: 192.168.56.201:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
    - 192.168.56.201:6379:1
    - 192.168.56.202:6379:1
    - 192.168.56.203:6379:1
```
Configuration instructions
- listen: 192.168.56.201:22121, the listening address and port of this server pool.
- hash: fnv1a_64, the hash function applied to keys.
- distribution: ketama, the key distribution algorithm that decides which server a key is routed to.
- redis: true, proxy Redis command requests (as opposed to Memcached).
- servers: the address, port, and weight of each server in the pool.
- auto_eject_hosts and server_failure_limit: if Twemproxy fails to send command requests to the same server three consecutive times, it marks that server offline, and its requests are handled by the other online servers in the pool.
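To make the hash-then-distribute step concrete, here is a small sketch of mapping a key to a server: the key is hashed with FNV-1a 64-bit (the `fnv1a_64` option above), then a server is chosen. Note the simplification: real Twemproxy with `distribution: ketama` uses consistent hashing, while this sketch uses plain modulo, and `pick_server` is an illustrative name, not Twemproxy's API.

```python
# FNV-1a 64-bit hash plus a simplified (modulo, not ketama) server pick.
FNV64_OFFSET = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3

def fnv1a_64(key: bytes) -> int:
    h = FNV64_OFFSET
    for b in key:
        h ^= b
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF  # keep 64 bits
    return h

def pick_server(key: str, servers: list) -> str:
    # Same key always maps to the same server; different keys spread out.
    return servers[fnv1a_64(key.encode()) % len(servers)]

servers = ["192.168.56.201:6379", "192.168.56.202:6379", "192.168.56.203:6379"]
print(pick_server("user:1000", servers))
```

The reason ketama (consistent hashing) is preferred over modulo in production is that when a server is ejected from the pool, modulo remaps almost every key, while consistent hashing remaps only the keys that belonged to the removed server.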