For technical practitioners, performance optimization is a required course.
Just as the go-to way to fix a pretty girl’s computer back in college was to “reboot it”, performance optimization has its own go-to trick.
You’ve heard the saying: if performance isn’t enough, add a cache! Yes, you heard me right: caching.
But, ha, a cache isn’t something you just throw in casually.
Wang was handed a flash-sale campaign to build. He designed a technical solution that pushed a large amount of data into the cache, hoping to use the high throughput of Redis to withstand the traffic peak.
The idea itself is sound. He estimated the required cache size at about 30 GB.
Do you see a problem with that?
If you think it’s fine as is, you can go play games now.
We know that Redis deployments come in master-slave (replication) mode or sentinel mode.
Whichever mode you use, when a master synchronizes data to a replica it first forks a child process to generate an RDB file, which can take a long time when the data volume is large. If persistence is also enabled on the master, performance is even harder to guarantee.
To solve this problem, Redis came up with a new approach: shard the data.
Suppose a node originally stored 30 GB of data. If we split it across six instances, each instance holds only 5 GB, and the pressure is much lower.
Note: keys are routed according to a specific hashing algorithm, so the data is not guaranteed to be evenly distributed.
What we’re going to focus on now is: how do we design this split?
Unofficial solutions
Before Redis 3.0 there was no official cluster solution, yet a single instance is limited by its memory and cannot keep scaling vertically. What to do?
The community proposed partitioning schemes built outside of Redis itself.
For example, client-side partitioning such as ShardedJedis, and proxy-based solutions such as Codis and Twemproxy. These mount a number of Redis instances behind the client or proxy; the instances are completely isolated and do not communicate with each other, and the client or proxy stitches them into one logical cluster to overcome the capacity limit of a single node.
The official plan
Redis officially proposed a Cluster solution in version 3.0, called Redis Cluster.
The core design of Redis Cluster introduces hash slots: the cluster is cut into 16,384 hash slots, which you can think of as small data partitions.
When we operate on Redis, the key we pass in is run through the CRC16 algorithm to produce a 16-bit value, which is then taken modulo 16384 to get the hash slot number.
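You can check which slot a key maps to with the CLUSTER KEYSLOT command; the slot number shown below is illustrative:
# returns CRC16(key) mod 16384; the exact number depends on the key
127.0.0.1:6379> CLUSTER KEYSLOT Tom哥:key
(integer) 6504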
Note: if you assign hash slots manually, all 16,384 slots must be allocated, otherwise the Redis cluster will not work.
How to build a Redis Cluster
There are two approaches: manual setup and automatic setup.
Let’s take a look at how to build a Redis Cluster manually.
First, prepare the machines. For testing purposes we will use a single machine and simulate six Redis instances on different ports.
1. Create three directories: conf, data, and log to store configuration, data, and log files respectively.
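A minimal sketch of those commands, assuming the /usr/local/redis-cluster base path used in the configuration below:
mkdir -p /usr/local/redis-cluster/{conf,data,log}
# one data directory per instance, named by port
mkdir -p /usr/local/redis-cluster/data/redis-{6379,6378,6377,6376,6375,6374}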
Next, modify the conf configuration file as follows (one file per instance; the example below is for port 6379):
# data directory
dir /usr/local/redis-cluster/data/redis-6379
# log file
logfile /usr/local/redis-cluster/log/redis-6379.log
# port number
port 6379
# enable cluster mode
cluster-enabled yes
# created automatically in the data directory after the cluster starts successfully
cluster-config-file "nodes-6379.conf"
# node timeout, 10 seconds
cluster-node-timeout 10000
2. Start the nodes
sudo redis-server conf/redis-6379.conf
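To bring up all six instances at once, a small shell loop works, assuming each config file is named after its port:
for port in 6379 6378 6377 6376 6375 6374; do
  sudo redis-server conf/redis-$port.conf
done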
3. Make all nodes communicate with each other to form a cluster. Run cluster meet {ip} {port} on one node. After a successful handshake, node status is propagated through the cluster via the Gossip protocol; other nodes automatically discover the new node and initiate handshakes of their own. Eventually all nodes sense each other and form a cluster.
127.0.0.1:6379> cluster meet 127.0.0.1 6378
127.0.0.1:6379> cluster meet 127.0.0.1 6377
127.0.0.1:6379> cluster meet 127.0.0.1 6376
127.0.0.1:6379> cluster meet 127.0.0.1 6375
127.0.0.1:6379> cluster meet 127.0.0.1 6374
4. Allocate hash slots. There are 16,384 slots in total; a range of them is assigned to each master instance, as shown below.
redis-cli -p 6379 cluster addslots {0..5461}
redis-cli -p 6378 cluster addslots {5462..10922}
redis-cli -p 6377 cluster addslots {10923..16383}
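You can confirm the allocation with cluster info: the cluster only reports the ok state once all 16,384 slots are assigned (output abridged):
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384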
5. After slots are allocated to the three primary nodes, mount a secondary node on each primary for emergency failover. The secondary node replicates the primary node’s slot information and service data.
# connect to the replica node
redis-cli -p 6376
# replicate the master; the ID below is the node ID of the 6379 instance
127.0.0.1:6376> cluster replicate 7d480c106752e0ba4be3efaf6628bd7c8c124013
Detailed steps: juejin.cn/post/684490…
6. Run cluster slots to view the slot distribution of each node in the cluster
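Illustrative output for the layout above (node IDs and replica entries omitted for brevity):
127.0.0.1:6379> cluster slots
1) 1) (integer) 0
   2) (integer) 5461
   3) 1) "127.0.0.1"
      2) (integer) 6379
2) 1) (integer) 5462
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 6378
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 6377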
How does a client know which Redis shard instance a key belongs to?
Redis Cluster is sharded: each instance holds only part of the slots, but the instances exchange their slot information with one another through the Gossip protocol, spreading it like a virus.
In the end, every instance holds the complete mapping from hash slots to instances.
When the application starts, the client connects to Redis, pulls the complete slot mapping from one of the instances, and caches it locally.
When a key operation command comes in, the client first computes the key’s hash slot and then sends the command to the corresponding Redis instance, completing the distributed routing logic.
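The redis-cli tool behaves like such a smart client when started with -c, following redirects automatically. A sketch (the slot number and target port are illustrative, based on the slot layout above):
$ redis-cli -c -p 6379
127.0.0.1:6379> SET Tom哥:key "hello"
-> Redirected to slot [6504] located at 127.0.0.1:6378
OK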
Of course, special situations can occur: scaling the cluster out or in will disturb the original hash slot distribution.
What does the client do if the mapping between slot and instance changes?
Don’t worry, the Redis designers thought of this too.
The solution is to use a redirection mechanism.
When the client sends a command for a key whose slot has been migrated to another instance, the instance replies with a MOVED error that carries the address of the new target instance.
The client updates its local cache, and subsequent requests for that slot are directed to the new instance.
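A sketch of what that raw redirect looks like to a client whose cached mapping is stale (slot and address illustrative):
127.0.0.1:6379> GET Tom哥:key
(error) MOVED 6504 127.0.0.1:6378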
But what if the slot holds many keys and the migration is still in progress, so some keys have not been moved yet?
127.0.0.1:6378> GET Tom哥:key
(error) ASK 6504 127.0.0.1:6379
When the client requests a key and receives an ASK error like the one above, it means the key’s slot is in the middle of a migration. In this case the client does not update its locally cached hash slot mapping.
The client first sends the ASKING command to the new target instance, then re-sends the original GET command. The command is executed on the new instance, but only for this one request.
The next operation on the same key is still routed to the original instance, until the slot migration completes.
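A sketch of that follow-up, run against the instance named in the ASK error:
127.0.0.1:6379> ASKING
OK
127.0.0.1:6379> GET Tom哥:key
"hello"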
The primary node of Redis Cluster is down. What can I do?
Any system carries the risk of going down. In sentinel mode, a separate sentinel cluster is deployed to monitor the whole Redis deployment; the details were covered in a previous article.
Redis Cluster does not deploy separate sentinel nodes. Instead, the master nodes monitor one another through heartbeats.
In simple terms: take master node A. If a single node considers A to be down, that is a subjective offline. If more than half of the nodes in the cluster consider A down, A is marked as objectively offline.
Once node A is marked objectively offline, the cluster begins failover. The other healthy master nodes vote to promote one of node A’s slave nodes to be the new master and serve requests in its place. A slave is elected once it receives votes from more than half of the masters.
After winning the election, the new master executes slaveof no one to stop replicating node A and become a master in its own right. It then takes over all the slots node A was responsible for and broadcasts a PONG message to the cluster to announce its latest state.
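For planned maintenance you can trigger the same promotion by hand: the CLUSTER FAILOVER command, run on the replica you want to promote, performs a coordinated switch (a minimal sketch):
# run on the replica that should become the new master
127.0.0.1:6376> CLUSTER FAILOVER
OK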
Note: in a Redis Cluster, read and write requests are both handled by the master nodes; the slave nodes are only used for data backup and failover.