Introduction to Redis
Redis is a high-performance key-value database that is completely open source and free, released under the BSD license.
Redis and other key-value caching products have the following three characteristics:
- Redis supports data persistence: data in memory can be saved to disk and reloaded on restart.
- Redis supports not only simple key-value data, but also list, set, zset, hash, and other data structures.
- Redis supports data backup, namely master-slave replication.
Comparing Redis with Memcached
Memcached advantages: 1. simple key-value storage 2. efficient memory usage 3. multi-core, multi-threaded processing. Memcached disadvantages: 1. no disaster recovery 2. no persistence.

Redis advantages: 1. rich data structures 2. persistence 3. master-slave synchronization and failover 4. in-memory database. Redis disadvantages: 1. single-threaded 2. single-core.
Redis installation and configuration
Official website download address: redis.io/. Download it and upload it to the /home/software directory on Linux.
Installation steps
1. Decompress
```shell
tar -zxvf redis-6.0.9.tar.gz
```
2. Install GCC compilation environment
```shell
yum install gcc-c++
```
3. Go to the decompressed redis-6.0.9 directory
```shell
make && make install
```
If compilation fails with an error like `make[1]: *** [server.o] Error 1`, the GCC version is too old and needs to be upgraded. The upgrade process is as follows:
```shell
yum -y install centos-release-scl
yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
scl enable devtoolset-9 bash
echo "source /opt/rh/devtoolset-9/enable" >> /etc/profile
gcc -v
```
Then run `make && make install` again.
Configuration steps
1. Go to the utils directory under the decompressed folder and copy the redis_init_script startup script to /etc/init.d:
```shell
cp redis_init_script /etc/init.d/
```
2. Create a directory to store the configuration file
```shell
mkdir /usr/local/redis
```
3. Copy the configuration file to this directory
```shell
cp redis.conf /usr/local/redis/
```
4. Modify the redis.conf core configuration file
Set the working directory (where persistence files are stored), the bind address, and a password:

```conf
# working directory for persistence files
dir /usr/local/redis/working
# listen on all interfaces instead of only localhost
# bind 127.0.0.1
bind 0.0.0.0
# be sure to set a password
requirepass imooc
```

5. Go to /etc/init.d and modify the redis_init_script file so that the configuration file path points to /usr/local/redis/redis.conf.
6. Add execute permission to the redis startup script, then start redis:

```shell
chmod 777 redis_init_script
./redis_init_script start
```

7. Check the redis process:

```shell
ps -ef | grep redis
```

8. Configure redis to start on boot:

```shell
chkconfig redis_init_script on
```
Now that a password is set, stopping redis requires editing the redis_init_script file and appending `-a <password>` to the stop command; after that, `./redis_init_script stop` works.
Redis command line client
- `redis-cli`: start the Redis command line client
- `auth <password>`: authenticate with the password
- `set key value`: set a cache entry
- `get key`: get a cached value
- `del key`: delete a cached key
- `redis-cli -a <password> ping`: check whether the server is alive
Redis data type
Key-related commands
- `keys *`: list all keys (not recommended in production because it can hurt performance)
- `type key`: show the type of a key
String
String is the simplest key-value cache type, and the most basic one. Related commands:
- `setnx key value`: set the value only if the key does not exist; an existing key is not overwritten
- `set key value ex time`: set a value with an expiration time in seconds
- `ttl key`: view the remaining time to live; -1 means the key never expires, -2 means it no longer exists
- `append key value`: append to the string
- `strlen key`: string length
- `incr key` / `decr key`: increment / decrement by 1
- `incrby key num` / `decrby key num`: increment / decrement by num
- `getrange key start end`: extract a substring; end = -1 means the end of the string
- `setrange key start newdata`: overwrite the string starting at position start
- `mset` / `mget`: set / get multiple keys in one command
- `select index`: switch databases; `flushdb` clears the current database, `flushall` clears all databases
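To make the semantics of `setnx`, `incr`, and `ttl` concrete, here is a minimal in-memory sketch in Python. It illustrates only the command semantics, not how Redis implements them; the `TinyStrings` class and its method names are invented for this example.

```python
import time

# Illustrative sketch of a few string-command semantics
# (setnx, set-with-expiry, ttl, incr); not Redis's implementation.
class TinyStrings:
    def __init__(self):
        self._data = {}       # key -> string value
        self._expire = {}     # key -> absolute expiry (monotonic seconds)

    def setnx(self, key, value):
        if key in self._data:
            return 0          # key exists: do not overwrite
        self._data[key] = value
        return 1

    def set_ex(self, key, value, seconds):
        self._data[key] = value
        self._expire[key] = time.monotonic() + seconds

    def ttl(self, key):
        if key not in self._data:
            return -2         # key does not exist
        if key not in self._expire:
            return -1         # key never expires
        return max(0, int(self._expire[key] - time.monotonic()))

    def incr(self, key):
        value = int(self._data.get(key, "0")) + 1
        self._data[key] = str(value)
        return value
```

Note how `incr` treats a missing key as "0", which matches the behavior you can observe with `incr` on a fresh key in redis-cli.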
Hash data type
Hash: similar to a map; stores structured data, such as an object. Related commands:
- `hset key field value`: set a field, e.g. `hset user name lucy` creates a user object with a name field whose value is lucy
- `hget user name`: get the value of name in the user object
- `hmset user age 18 phone 139123123`: set multiple fields of the object at once
- `hmget user age phone`: get multiple fields at once
- `hgetall user`: get all fields and values of the object
- `hkeys user`: all field names in the object
- `hvals user`: all field values in the object
- `hdel user <field>`: delete a field of the object
List data type
List: an ordered sequence such as {a, b, c, d, ...}. Related commands:
- `rpush userList 1 2 3 4 5`: build a list, pushing from the right; `lpush` pushes from the left
- `lpop` / `rpop`: pop an element from the left / right
- `llen list`: list length
- `lindex list index`: get the element at an index
- `lset list index value`: replace the element at an index with value
- `linsert list before/after pivot value`: insert a new value before or after the value pivot
- `lrem list count value`: delete count occurrences of value
- `ltrim list start end`: trim the list, keeping only the given range and discarding the rest
- `del list`: delete the list
Set data type
Set: an unordered collection with deduplication. Related commands:
- `sadd set value`: add a value to the set
- `smembers set`: list all members of the set
- `sismember set pig`: check whether pig is a member
- `srem set pig`: remove pig from the set
- `spop set <number>`: remove and return the given number of random members
- `srandmember set <number>`: return random members without removing them
- `sdiff set1 set2`: difference, elements in set1 but not in set2
- `sinter set1 set2`: intersection, elements in both set1 and set2
- `sunion set1 set2`: union of set1 and set2
Zset data type
Zset: a sorted set, providing deduplication plus ordering. Related commands:
- `zrange zset 0 -1 withscores`: list members in score order, together with their scores
- `zrank zset member`: get the rank of a member
- `zcard zset`: number of members
- `zcount zset score1 score2`: count the members whose score lies between score1 and score2
- `zrangebyscore zset score1 score2`: get the members whose score lies between score1 and score2
- `zrangebyscore zset score1 score2 limit offset count`: same as above, paged starting at offset
- `zrem zset duck`: remove the member duck
For more information, see redisdoc.com/
SpringBoot integration redis combat
Import the dependency and write the configuration file
```xml
<!-- import redis dependency -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```
```yaml
spring:
  redis:
    database: 1
    host: 121.**.***.81
    port: 6379
    password: ****
```
Write a controller to test
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import springfox.documentation.annotations.ApiIgnore;

@RestController
@ApiIgnore
@RequestMapping("redis")
public class RedisController {

    @Autowired
    private RedisTemplate redisTemplate;

    @GetMapping("/set")
    public Object set(String key, String value) {
        redisTemplate.opsForValue().set(key, value);
        return "OK";
    }

    @GetMapping("/get")
    public Object get(String key) {
        return (String) redisTemplate.opsForValue().get(key);
    }

    @GetMapping("/del")
    public Object del(String key) {
        redisTemplate.delete(key);
        return "OK";
    }
}
```
Test access from the browser:
```
http://localhost:8088/redis/set?key=name&value=zhangsan
```
Log in to Linux to see the results
A custom Redis utility class
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.StringRedisConnection;
import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;

@Component
public class RedisOperator {

    // @Autowired
    // private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private StringRedisTemplate redisTemplate;

    // ---------------- Key operations ----------------

    /** TTL key: return the time to live of a key, in seconds. */
    public long ttl(String key) {
        return redisTemplate.getExpire(key);
    }

    /** EXPIRE key seconds: set the expiration time of a key. */
    public void expire(String key, long timeout) {
        redisTemplate.expire(key, timeout, TimeUnit.SECONDS);
    }

    /** INCRBY key delta: increment the number stored at key by delta. */
    public long incr(String key, long delta) {
        return redisTemplate.opsForValue().increment(key, delta);
    }

    /** KEYS pattern: find all keys matching the pattern. */
    public Set<String> keys(String pattern) {
        return redisTemplate.keys(pattern);
    }

    /** DEL key: delete a key. */
    public void del(String key) {
        redisTemplate.delete(key);
    }

    // ---------------- String operations ----------------

    /** SET key value: set the string value of a key. */
    public void set(String key, String value) {
        redisTemplate.opsForValue().set(key, value);
    }

    /** SET key value EX seconds: set a value with an expiration time in seconds. */
    public void set(String key, String value, long timeout) {
        redisTemplate.opsForValue().set(key, value, timeout, TimeUnit.SECONDS);
    }

    /** GET key: return the string value associated with the key. */
    public String get(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    /** MGET key [key ...]: get the values of multiple keys. */
    public List<String> mget(List<String> keys) {
        return redisTemplate.opsForValue().multiGet(keys);
    }

    /** Batch GET via pipelining (compare nginx -> keepalive, redis -> pipeline). */
    public List<Object> batchGet(List<String> keys) {
        return redisTemplate.executePipelined(new RedisCallback<String>() {
            @Override
            public String doInRedis(RedisConnection connection) throws DataAccessException {
                StringRedisConnection src = (StringRedisConnection) connection;
                for (String k : keys) {
                    src.get(k);
                }
                return null;
            }
        });
    }

    // ---------------- Hash operations ----------------

    /** HSET key field value: set a field in the hash stored at key. */
    public void hset(String key, String field, Object value) {
        redisTemplate.opsForHash().put(key, field, value);
    }

    /** HGET key field: get the value of a field in a hash. */
    public String hget(String key, String field) {
        return (String) redisTemplate.opsForHash().get(key, field);
    }

    /** HDEL key field [field ...]: delete fields from a hash; missing fields are ignored. */
    public void hdel(String key, Object... fields) {
        redisTemplate.opsForHash().delete(key, fields);
    }

    /** HGETALL key: return all fields and values of a hash. */
    public Map<Object, Object> hgetall(String key) {
        return redisTemplate.opsForHash().entries(key);
    }

    // ---------------- List operations ----------------

    /** LPUSH key value: insert a value at the head of a list; returns the new length. */
    public long lpush(String key, String value) {
        return redisTemplate.opsForList().leftPush(key, value);
    }

    /** LPOP key: remove and return the head element of a list. */
    public String lpop(String key) {
        return redisTemplate.opsForList().leftPop(key);
    }

    /** RPUSH key value: insert a value at the tail of a list; returns the new length. */
    public long rpush(String key, String value) {
        return redisTemplate.opsForList().rightPush(key, value);
    }
}
```
Redis persistence
- RDB: snapshot mode
Redis uses RDB snapshot persistence by default. In snapshot mode, the data in memory is saved directly to a dump file at intervals determined by the configured save points. When Redis needs to persist, it forks a child process that writes the data to a temporary RDB file on disk; when the child process finishes, the temporary file replaces the original RDB. This takes advantage of copy-on-write, and RDB recovers data faster than AOF.

Advantages: RDB files are great for backups. For example, you can keep hourly RDB backups for the last 24 hours and daily backups for the last month; if you run into problems, you can always restore the dataset to a different version. RDB is therefore well suited for disaster recovery.

Disadvantages: Performance: RDB cannot achieve real-time, second-level persistence, because every bgsave forks a child process, which is a heavyweight operation that is costly to run frequently. Data integrity: if you need to minimize data loss in the event of a server failure, RDB is not for you. Although Redis lets you set different save points to control how often RDB files are written, an RDB file must hold the state of the entire dataset, so saving one is not a cheap operation; you will probably save at most once every five minutes, and in a crash you could lose several minutes of data.
The snapshot trigger rules and related settings in redis.conf:

```conf
# snapshot if at least 1 key changed within 900 seconds
save 900 1
# snapshot if at least 10 keys changed within 300 seconds
save 300 10
# snapshot if at least 10000 keys changed within 60 seconds
save 60 10000
# stop accepting writes if bgsave fails
stop-writes-on-bgsave-error yes
# compress the RDB file
rdbcompression yes
# RDB file name
dbfilename dump.rdb
# RDB file directory
dir /usr/local/redis/working
```
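The save-point rule can be sketched as a small predicate: a snapshot fires once any (seconds, changes) pair is satisfied. This is an illustration of the rule only, not Redis source; `should_snapshot` and `SAVE_POINTS` are names invented here.

```python
# Save points mirroring "save 900 1", "save 300 10", "save 60 10000".
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(seconds_since_last_save, changes_since_last_save):
    """Return True if any configured save point is satisfied."""
    return any(
        seconds_since_last_save >= secs and changes_since_last_save >= changes
        for secs, changes in SAVE_POINTS
    )
```

So a single changed key triggers a snapshot only after 15 minutes, while a burst of 10000 changes triggers one within a minute.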
- AOF: indicates the log mode
AOF: Store all Redis server modification commands in a file, a collection of commands.
When AOF persistence is enabled, each write command is appended to appendonly.aof via the write function.
Advantages: AOF persistence makes Redis much more durable: you can choose among different fsync policies, such as never fsync, fsync every second, or fsync on every write command. The default policy is fsync once per second; with this configuration Redis still performs well and loses at most one second of data in a crash (fsync runs in a background thread, so the main thread can keep processing command requests).
Disadvantages: AOF files are usually larger than RDB files for the same data set. Depending on the fsync strategy used, AOF may be slower than RDB. Fsync performance per second is still very high under normal conditions. However, RDB can provide more guaranteed maximum latency when handling large write loads.
```conf
# enable AOF (disabled by default)
appendonly no
# AOF file name
appendfilename "appendonly.aof"
# fsync policy: always / everysec / no
appendfsync everysec
# do not fsync while an AOF rewrite is in progress
no-appendfsync-on-rewrite no
# rewrite once the file has grown 100% past the last rewrite size
auto-aof-rewrite-percentage 100
# and is at least 64mb
auto-aof-rewrite-min-size 64mb
```
The AOF file still exists in the dir directory specified in the configuration file.
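The append-and-replay idea behind AOF can be shown with a toy sketch: write commands are appended to a log file, and replaying that log on startup rebuilds the in-memory state. This is not the real AOF wire format; `TinyAof` and its simplistic space-separated log lines are invented for this illustration.

```python
import os

# Toy append-only file: every write command is logged, and replaying
# the log on startup reconstructs the dataset (values must not contain
# spaces in this simplified format).
class TinyAof:
    def __init__(self, path):
        self.path = path
        self.data = {}
        self._replay()

    def _replay(self):
        # On "restart", re-run every logged command in order.
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                cmd, key, *rest = line.rstrip("\n").split(" ")
                if cmd == "SET":
                    self.data[key] = rest[0]
                elif cmd == "DEL":
                    self.data.pop(key, None)

    def _append(self, *parts):
        with open(self.path, "a") as f:
            f.write(" ".join(parts) + "\n")

    def set(self, key, value):
        self.data[key] = value
        self._append("SET", key, value)

    def delete(self, key):
        self.data.pop(key, None)
        self._append("DEL", key)
```

Constructing a second `TinyAof` on the same file plays the role of a restart: the state survives because every mutation was logged first.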
Redis master-slave architecture
A simple introduction
A single Redis instance can handle QPS on the order of tens of thousands. For caches, that capacity is generally used to support high read concurrency, so the architecture is made master-slave: one master and many slaves. The master handles writes and replicates the data to the slave nodes; the slaves handle reads, and all read requests go to them. This also makes horizontal scaling easy: Redis replication -> master/slave architecture -> read/write separation -> horizontal scaling for high read concurrency.
Core principles
When a slave node is started, it sends a PSYNC command to the master node.
If this is the first time the slave connects to the master, a full resynchronization (full replication) is triggered. The master starts a background process to generate an RDB snapshot file while caching all new write commands received from clients. Once the RDB file is generated, the master sends it to the slave; the slave writes it to local disk and then loads it into memory. The master then sends the cached write commands to the slave, bringing its data up to date. If the slave later loses the connection due to a network fault, it automatically reconnects to the master, which copies over only the missing data.

Note: with a master/slave architecture, it is recommended to enable persistence on the master. Do not rely on slaves as the master's only hot backup: if persistence is disabled on the master and it crashes and restarts with an empty dataset, the slaves will replicate that empty dataset and the data is lost.
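The read/write separation described above can be sketched as a small command router: writes go to the master, reads are spread across replicas round-robin. This is illustrative only; `ReadWriteRouter` and its short command list are invented here, and real client libraries (e.g. Lettuce's replica read support) do this classification for you.

```python
import itertools

# Sketch of read/write separation over a master/slave setup.
# "master" and "replicas" stand in for real connections.
class ReadWriteRouter:
    WRITE_COMMANDS = {"SET", "DEL", "EXPIRE"}  # only a few shown

    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)

    def route(self, command):
        # Writes must hit the master; reads rotate across replicas.
        if command.upper() in self.WRITE_COMMANDS:
            return self.master
        return next(self._replicas)
```

With one master and two replicas, successive reads alternate between the replicas while every write lands on the master.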
Actual master and slave architecture building
Check the status of the current node with `info replication`.
For convenience we set up one master and one slave, using two cloud servers. The master needs no special configuration; the following settings go in the slave node's redis.conf. First, point the slave at the master: `replicaof <master-ip> <master-port>`.
Second, since the master requires a password, set `masterauth <password>`.
Check the configuration file to ensure that the secondary node is read-only and not written.
Finally, don't forget to restart the slave node: `/etc/init.d/redis_init_script stop`, then `/etc/init.d/redis_init_script start`.
After the restart, you can see that the data on the primary node has been synchronized to the secondary node, and you can use Info Replication to view further information.
On the slave node:
On the master node:
In this case, if the master goes down, the slave remains a slave (as `info replication` shows); writes are unavailable until the master is restarted and resumes its role.
Redis diskless replication
If the machine uses a mechanical disk with low I/O efficiency, diskless replication can help: the master creates the RDB in memory and sends it to the slave directly, without writing it to its own disk first. Simply enable repl-diskless-sync in the configuration file:
```conf
repl-diskless-sync yes
# wait 5s before starting replication, so that more slaves can reconnect
repl-diskless-sync-delay 5
```
Redis cache expiration policy
Periodic deletion: by default, Redis checks for expired keys 10 times per second; this approach costs CPU.
Lazy deletion: an expired key is removed only when it is next accessed; this is CPU-friendly but memory-unfriendly, since untouched expired keys linger in memory.
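Lazy deletion can be shown in a few lines: the expiry check happens inside `get`, so an expired key that is never read again keeps occupying memory — which is exactly why Redis combines lazy deletion with periodic sampling. `LazyExpiringDict` is an invented illustration, not Redis internals.

```python
import time

# Sketch of lazy deletion: an expired key is removed only when it
# is next accessed.
class LazyExpiringDict:
    def __init__(self):
        self._store = {}  # key -> (value, expire_at or None)

    def set(self, key, value, ttl=None):
        expire_at = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expire_at)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expire_at = item
        if expire_at is not None and time.monotonic() >= expire_at:
            del self._store[key]  # lazy deletion happens here, on access
            return None
        return value
```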
Redis memory obsolescence management mechanism
When memory is full, Redis could in principle spill to disk, but that would be pointless: disk is far slower than memory and would hurt Redis performance. Instead, when memory is full, Redis provides an eviction mechanism: memory management.
maxmemory: when memory usage reaches this limit, the cache is cleared according to the eviction policy:
- `noeviction`: never evict old entries; new writes fail with an error
- `allkeys-lru`: evict the least recently used keys to make room for new ones (recommended)
- `allkeys-random`: evict random keys from the whole keyspace (not recommended)
- `volatile-lru`: among keys with an expire time set, evict the least recently used
- `volatile-random`: among keys with an expire time set, evict at random
- `volatile-ttl`: among keys with an expire time set, evict those closest to expiring
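The idea behind `allkeys-lru` is shown below with an exact LRU cache built on `OrderedDict`. Note that real Redis uses an approximate, sampled LRU rather than an exact ordering like this; `LruCache` is an invented illustration.

```python
from collections import OrderedDict

# Sketch of allkeys-lru: when the cache is full, evict the least
# recently used key to make room for the new one.
class LruCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the LRU key
```

Reading a key refreshes its recency, so an old-but-hot key survives eviction while a cold one is dropped.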
Redis sentinel mechanism
In the master-slave architecture, there was a problem with what to do if the master went down, and the sentry mechanism was designed to solve this problem.
Sentinel is a Redis high-availability solution: it monitors the status of one or more Redis master services, along with all the slaves of those masters. When a master goes down, Sentinel promotes one of that master's slaves to master so it can take over for the downed one.
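The failover decision hinges on a quorum (the trailing number in `sentinel monitor`): a master is only considered objectively down once enough sentinels agree it is subjectively down. A minimal sketch of that rule, with `objectively_down` invented for this illustration:

```python
# Sketch of the sentinel quorum rule: failover starts only once at
# least `quorum` sentinels report the master as subjectively down.
def objectively_down(sentinel_votes, quorum):
    """sentinel_votes: one boolean per sentinel (True = master looks down)."""
    return sum(sentinel_votes) >= quorum
```

With three sentinels and a quorum of 2, one sentinel with a flaky link to the master cannot trigger a failover on its own.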
Configuration
Go to the redis decompressed directory and copy sentinel.conf to /usr/local/redis.
```shell
cp sentinel.conf /usr/local/redis/
```
The main configurations are as follows:
```conf
# common settings
daemonize yes
pidfile /var/run/redis-sentinel.pid
logfile /usr/local/redis/sentinel/redis-sentinel.log
dir /usr/local/redis/sentinel

# core settings
# the trailing 2 is the quorum: two sentinels must agree the master is down
sentinel monitor mymaster 121.**.195.81 6379 2
# master password
sentinel auth-pass mymaster ****
# how long the master must be unreachable before it is judged down
sentinel down-after-milliseconds mymaster 30000
# how many slaves may resync with the new master in parallel
sentinel parallel-syncs mymaster 1
# failover timeout; failover is itself a process, and if the elected
# sentinel does not finish within this period, another sentinel takes over
sentinel failover-timeout mymaster 180000
```
Note: Each Redis node needs this configuration file.
Start sentinel:
```shell
redis-sentinel sentinel.conf
```
Test
For convenience, two cloud servers are again used to build a master-slave architecture with Sentinel. Note that the two servers' private networks do not reach each other. When the master goes down, `info replication` shows that the slave is not promoted to master, and dynamically watching sentinel.log with `tail -f` shows an error like `-failover-abort-not-elected`.
The cause is that the two cloud servers cannot reach each other over the private network. The fix is in sentinel.conf: set `sentinel announce-ip "<IP of this server>"` on each node so that each sentinel advertises an address the others can reach. With that in place, they communicate successfully.
Note:
- In practice, a sentinel deployment needs at least three nodes, and an odd number, to be meaningful
- Sentinels are distributed on different computer nodes
- One group of sentinels monitors only one master/slave group
SpringBoot integrates with Redis Sentry
As with the earlier SpringBoot Redis integration, only the configuration file changes:
```yaml
spring:
  redis:
    database: 1
    password: pass
    sentinel:
      master: mymaster
      nodes: 121.89.***.81:26379,121.*.**.241:26379
```
The test procedure is the same as in the earlier SpringBoot Redis integration; the test succeeded.
Redis Cluster (Cluster)
Set up a redis-cluster with three master and three slave nodes.
redis.conf configuration
```conf
# enable cluster mode
cluster-enabled yes
# each node needs its own configuration file, six files in total; this file
# stores the node's view of the cluster state and is maintained by Redis
# itself, so we never need to touch it
cluster-config-file nodes-201.conf
# node timeout
cluster-node-timeout 5000
# enable AOF
appendonly yes
```
Start six instances of Redis
Remove appendonly.aof and dump.rdb first; otherwise an error will be reported during cluster creation. After that, restart Redis.
Create the cluster
Here the master-to-slave ratio is 1:1. Nodes 1-3 are masters and nodes 4-6 are slaves, paired as 1 and 4, 2 and 5, 3 and 6; this is also the most classic cluster layout.
```shell
redis-cli --cluster create ip1:port1 ip2:port2 ip3:port3 ip4:port4 ip5:port5 ip6:port6 --cluster-replicas 1
```
If a message is displayed indicating that you do not have permission, add a password.
```shell
redis-cli -a password --cluster create ip1:port1 ip2:port2 ip3:port3 ip4:port4 ip5:port5 ip6:port6 --cluster-replicas 1
```
Checking Cluster Information
```shell
redis-cli -a password --cluster check 192.168.25.64:6379
```
The slot concept
There are 16,384 hash slots in total, distributed evenly across the masters. Each key maps to one slot, and its data is stored on the node that owns that slot.
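The key-to-slot mapping is `CRC16(key) mod 16384`, where Redis uses the CRC16-CCITT (XModem) variant and honors "hash tags": if the key contains `{...}`, only the part inside the braces is hashed, so related keys can be forced onto the same slot. A sketch of that mapping (illustrative; the function names are invented here):

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XModem): init 0x0000, polynomial 0x1021, no reflection.
    # This is the checksum Redis Cluster uses for key slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Honor hash tags: only the part between the first "{" and the next
    # "}" is hashed, so "{user}:a" and "{user}:b" land in the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # ignore empty "{}"
            key = key[start + 1 : end]
    return crc16_xmodem(key.encode()) % 16384
```

You can compare the result against a live cluster with `CLUSTER KEYSLOT <key>` in redis-cli.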
The Redis client connects to the cluster
The -c flag enables cluster mode:
```shell
redis-cli -c -a password -h ip -p port
```
You can view cluster information using Cluster Info or Cluster Nodes.
SpringBoot integrates with the Redis cluster
```yaml
spring:
  redis:
    password: pass
    cluster:
      # six different nodes
      nodes: 121.89.***.81:6379,121.*.**.241:6379,121.89.***.81:6379,121.*.**.241:6379,121.89.***.81:6379,121.*.**.241:6379
```