I'm 21, a junior college graduate, and have only been working in Java backend development for two weeks. If anything here is badly written, please give me more advice! This article is the more hands-on introduction: commands plus the config file, explained in detail.

Introduction

I have roughly sorted out some preliminary conceptual knowledge of Redis, and this chapter covers most of the introductory material. The earlier Redis primer was mainly theoretical; this one goes a little deeper and is more practical.

Next, expect some articles on individual Redis internals, such as the SDS dynamic string.

Background

Redis's rise tracks the evolution of web architecture: ① the single-machine era; ② Memcached cache + MySQL + vertical splitting; ③ sub-database/sub-table + horizontal splitting + MySQL clusters; ④ the era of massive concurrent user access; ⑤ a sharp increase in users, with a cache layer protecting the database; ⑥ explosive user growth that a single machine cannot satisfy; ⑦ Redis Cluster (and Sentinel mode).

NoSQL

Before learning Redis, it helps to understand NoSQL and the split between relational and non-relational databases. NoSQL means "Not Only SQL", so in practice you rarely use a single database: the usual combination is MySQL + NoSQL together. Advantages: data without rigid relationships scales out easily; high performance with large data volumes; diverse data shapes with no schema to design in advance. Categories: key-value stores, document databases, column stores, graph databases.

Five common data types

Verifying service connections

[root@iZ2zebzs5dnocz7mu3p6wnZ bin]# redis-cli -p 6379
127.0.0.1:6379> ping
PONG

String

Application: temporary storage of counters such as follower counts, page views, and clicks

Storing and reading data with set / get

127.0.0.1:6379> get demo
(nil)
127.0.0.1:6379> set demo huanshao
OK
127.0.0.1:6379> get demo
"huanshao"
127.0.0.1:6379> set demo1 nb
OK
127.0.0.1:6379> keys*
(error) ERR unknown command `keys*`, with args beginning with: 
127.0.0.1:6379> keys *
1) "\xac\xed\x00\x05t\x00\x04Demo"
2) "demo"
3) "demo1"

Check whether the key exists

127.0.0.1:6379> exists demo
(integer) 1
127.0.0.1:6379> exists demo2
(integer) 0
127.0.0.1:6379> type demo
string

Setting a key's expiration time (setex) and checking it (ttl)

127.0.0.1:6379> setex demo2 30 wc
OK
127.0.0.1:6379> ttl demo2
(integer) 25
127.0.0.1:6379> ttl demo2
(integer) 21
127.0.0.1:6379> keys *
1) "\xac\xed\x00\x05t\x00\x04Demo"
2) "demo"
3) "demo1"
127.0.0.1:6379> get demo1
"nb"

append concatenates onto a value, much like StringBuilder.append in Java

127.0.0.1:6379> append demo1 lth
(integer) 5
127.0.0.1:6379> get demo1
"nblth"

Replacing part of a value (setrange)

127.0.0.1:6379> get demo1
"nblth"
123.56.240.222:6379> setrange demo1 2 S
(integer) 5
123.56.240.222:6379> get demo1
"nbSth"

setnx: set the key only if it does not already exist

123.56.240.222:6379> keys *
1) "\xac\xed\x00\x05t\x00\x04Demo"
2) "demo"
3) "demo1"
123.56.240.222:6379> setnx demo2 LTH
(integer) 1
123.56.240.222:6379> keys *
1) "\xac\xed\x00\x05t\x00\x04Demo"
2) "demo"
3) "demo1"
4) "demo2"
123.56.240.222:6379> get demo2
"LTH"

Batch operations: inserting (mset) and fetching (mget) data

123.56.240.222:6379> mset hs1 5 hs2 6
OK
123.56.240.222:6379> keys *
1) "hs2"
2) "\xac\xed\x00\x05t\x00\x04Demo"
3) "demo"
4) "demo2"
5) "hs1"
6) "demo1"
123.56.240.222:6379> mget hs1 hs2
1) "5"
2) "6"

Numeric operations (incr / decr / incrby / decrby)

123.56.240.222:6379> incr hs1
(integer) 6
123.56.240.222:6379> get hs1
"6"
123.56.240.222:6379> decr hs2
(integer) 5
123.56.240.222:6379> get hs2
"5"
123.56.240.222:6379> get hs1
"6"
123.56.240.222:6379> get hs2
"5"
123.56.240.222:6379> incrby hs1 4
(integer) 10
123.56.240.222:6379> get hs1
"10"
(0.81s)
123.56.240.222:6379> decrby hs2 3
(integer) 2
123.56.240.222:6379> get hs2
"2"
123.56.240.222:6379> keys *
1) "hs2"
2) "\xac\xed\x00\x05t\x00\x04Demo"
3) "demo"
4) "demo2"
5) "hs1"
6) "demo1"

Clearing the current database (flushdb)

123.56.240.222:6379> flushdb
OK
123.56.240.222:6379> keys *
(empty array)

msetnx guarantees atomicity: it either fully succeeds or fully fails

123.56.240.222:6379> msetnx k1 v1 k2 v2
(integer) 1
123.56.240.222:6379> keys *
1) "k1"
2) "k2"
123.56.240.222:6379> flushdb
OK
123.56.240.222:6379> keys *
(empty array)

getset: return the old value (nil if none) and set the new one

123.56.240.222:6379> getset k3 v3
(nil)
123.56.240.222:6379> get k3
"v3"

Switching databases (select) and getting the database size (dbsize)

123.56.240.222:6379> dbsize
(integer) 1
123.56.240.222:6379> select 2
OK
123.56.240.222:6379[2]> keys *
(empty array)
123.56.240.222:6379[2]> select 1
OK
123.56.240.222:6379[1]> select 1
OK
123.56.240.222:6379[1]> keys *
(empty array)
123.56.240.222:6379[1]> select 0
OK
123.56.240.222:6379> keys *
1) "k3"

List

Application: follow lists, hot-data rankings, and other list-shaped data; push data into the list and read it back

123.56.240.222:6379> keys *
1) "k3"
123.56.240.222:6379> flushdb
OK
123.56.240.222:6379> keys *
(empty array)
123.56.240.222:6379> lpush mylist lth
(integer) 1
123.56.240.222:6379> lpush mylist jia
(integer) 2
123.56.240.222:6379> lpush mylist you
(integer) 3
123.56.240.222:6379> lrange mylist 0-1
(error) ERR wrong number of arguments for 'lrange' command
123.56.240.222:6379> lrange mylist 0 -1
1) "you"
2) "jia"
3) "lth"
123.56.240.222:6379> lrange mylist 0 1
1) "you"
2) "jia"
123.56.240.222:6379> rpush mylist xiaofang
(integer) 4
123.56.240.222:6379> lrange mylist 0 1
1) "you"
2) "jia"
123.56.240.222:6379> lrange mylist 0 -1
1) "you"
2) "jia"
3) "lth"
4) "xiaofang"

Get an element by index (lindex) and the list length (llen)

123.56.240.222:6379> lindex mylist 1
"lth"
123.56.240.222:6379> llen mylist
(integer) 2
123.56.240.222:6379> lrange mylist 0 -1
1) "jia"
2) "lth"

Removing list data (lpop / rpop)

123.56.240.222:6379> lpush mylist xiaofang
(integer) 3
123.56.240.222:6379> lpush mylist mao rpop lth
(integer) 6
123.56.240.222:6379> lrange mylist 0 -1
1) "lth"
2) "rpop"
3) "mao"
4) "xiaofang"
5) "jia"
6) "lth"
123.56.240.222:6379> lpush mylist sql
(integer) 7
123.56.240.222:6379> lrange mylist 0 -1
1) "sql"
2) "lth"
3) "rpop"
4) "mao"
5) "xiaofang"
6) "jia"
7) "lth"
123.56.240.222:6379> rpop mylist 1
1) "lth"
123.56.240.222:6379> flushdb
OK
123.56.240.222:6379> lrange mylist 0 -1
(empty array)

Simulate stacks and queues

LPUSH + RPOP = queue (FIFO)
LPUSH + LPOP = stack (LIFO)
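As a sketch (plain Python, not Redis itself), the same behavior falls out of a deque: appendleft plays LPUSH, pop plays RPOP, and popleft plays LPOP.

```python
from collections import deque

# LPUSH pushes on the left, RPOP pops on the right -> FIFO queue
q = deque()
for item in ["a", "b", "c"]:
    q.appendleft(item)      # like: lpush mylist a / b / c
first_out = q.pop()         # like: rpop mylist -> first in, first out

# LPUSH pushes on the left, LPOP pops on the same left end -> LIFO stack
s = deque()
for item in ["a", "b", "c"]:
    s.appendleft(item)
top = s.popleft()           # like: lpop mylist -> last in, first out

print(first_out, top)       # a c
```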

Set

Application: unordered, no duplicate members. Intersection, union, and difference operations enable features like Weibo's mutual friends and common followers, random events, lotteries, and random name picking

Adding set data (sadd); duplicates are rejected

123.56.240.222:6379> sadd myset lth
(integer) 1
123.56.240.222:6379> sadd myset lth
(integer) 0
123.56.240.222:6379> sadd myset hs
(integer) 1

Get set members (smembers) and test membership (sismember)

123.56.240.222:6379> smembers myset
1) "hs"
2) "lth"
123.56.240.222:6379> sismember myset hs
(integer) 1
123.56.240.222:6379> sismember myset hss
(integer) 0

Get set size

123.56.240.222:6379> smembers myset
1) "hs"
2) "xiaofang"
3) "lth"
123.56.240.222:6379> scard myset
(integer) 3

Remove a set element (srem)

123.56.240.222:6379> srem myset lth
(integer) 1
123.56.240.222:6379> smembers myset
1) "hs"
2) "xiaofang"

Pick a random set element (srandmember)

123.56.240.222:6379> smembers myset
1) "hs"
2) "xiaofang"
123.56.240.222:6379> srandmember myset
"xiaofang"
123.56.240.222:6379> srandmember myset
"xiaofang"
123.56.240.222:6379> srandmember myset
"xiaofang"
123.56.240.222:6379> srandmember myset
"hs"

Remove a random element (spop)

123.56.240.222:6379> spop myset 1
1) "hs"
123.56.240.222:6379> smembers myset
1) "xiaofang"

Create a second set and take the set difference (sdiff)

123.56.240.222:6379> keys *
1) "youset"
2) "myset"
(0.82s)
123.56.240.222:6379> smembers youset
1) "xiaohuan"
123.56.240.222:6379> smembers myset
1) "xiaofang"
123.56.240.222:6379> sadd youset xiaofang
(integer) 1
123.56.240.222:6379> smembers youset
1) "xiaofang"
2) "xiaohuan"
123.56.240.222:6379> smembers myset
1) "xiaofang"
123.56.240.222:6379> sdiff youset myset
1) "xiaohuan"

Take the set intersection (sinter)

123.56.240.222:6379> smembers youset
1) "xiaofang"
2) "xiaohuan"
123.56.240.222:6379> smembers myset
1) "xiaofang"
123.56.240.222:6379> sinter myset youset
1) "xiaofang"

Take the union of sets

123.56.240.222:6379> sunion youset myset
1) "xiaohuan"
2) "xiaofang"

Hash

Application: thanks to its field-value mapping, a hash is especially suited to storing objects: user details, product details, order details, and so on.

Setting (hset) and getting (hget) hash data

123.56.240.222:6379> hset myhash name lth
(integer) 1
123.56.240.222:6379> hset myhash age 18
(integer) 1
123.56.240.222:6379> hget myhash name
"lth"
123.56.240.222:6379> hget myhash age
"18"

Get all fields and values (hgetall)

123.56.240.222:6379> hgetall myhash
1) "name"
2) "lth"
3) "age"
4) "18"

Fetch multiple fields in a batch (hmget)

123.56.240.222:6379> hmget myhash name age
1) "lth"
2) "18"

Delete the name field of myhash (hdel)

123.56.240.222:6379> hdel myhash name
(integer) 1
123.56.240.222:6379> hgetall myhash
1) "age"
2) "18"

Get hash length

123.56.240.222:6379> hlen myhash
(integer) 1
123.56.240.222:6379> hset myhash name lll
(integer) 1
123.56.240.222:6379> hlen myhash
(integer) 2

Check whether a given field exists in myhash (hexists)

123.56.240.222:6379> hexists myhash name
(integer) 1
123.56.240.222:6379> hexists myhash names
(integer) 0

Get all fields (hkeys) and all values (hvals)

123.56.240.222:6379> hkeys myhash
1) "age"
2) "name"
123.56.240.222:6379> hvals myhash
1) "18"
2) "lll"

Add 3 to the age field of the myhash key

123.56.240.222:6379> hincrby myhash age 3
(integer) 21

Set the wight field only if it does not exist (hsetnx)

123.56.240.222:6379> hsetnx myhash wight 160
(integer) 1
123.56.240.222:6379> hkeys myhash
1) "age"
2) "name"
3) "wight"

ZSet

Application: real-time ranking information in live-streaming systems: online user lists, gift leaderboards, bullet-screen message lists, and so on.

Add data and iterate over data

127.0.0.1:6379> zadd myzset 1 lth
(integer) 1
127.0.0.1:6379> zadd myzset 2 xf
(integer) 1
127.0.0.1:6379> zrange myzset
(error) ERR wrong number of arguments for 'zrange' command
127.0.0.1:6379> zrange myzset 0 -1
1) "lth"
2) "xf"

Batch Adding Data

127.0.0.1:6379> zadd myzset 3 zsq 4 zyn
(integer) 2
127.0.0.1:6379> zrange myzset 1 -1
1) "xf"
2) "zsq"
3) "zyn"
127.0.0.1:6379> zrange myzset 0 -1
1) "lth"
2) "xf"
3) "zsq"
4) "zyn"

Query interval range values in the zset collection

127.0.0.1:6379> zrange myzset 0 -1
1) "lth"
2) "xf"
3) "zsq"
4) "zyn"
127.0.0.1:6379> zrangebyscore myzset -inf +inf
1) "lth"
2) "xf"
3) "zsq"
4) "zyn"
127.0.0.1:6379> zrangebyscore myzset -inf +inf withscores
1) "lth"
2) "1"
3) "xf"
4) "2"
5) "zsq"
6) "3"
7) "zyn"
8) "4"

Gets the number of members in a range in a zset

127.0.0.1:6379> zcount myzset 1 2
(integer) 2
127.0.0.1:6379> zcount myzset 1 3
(integer) 3

Sort from largest to smallest

127.0.0.1:6379> zrevrange myzset 0 -1
1) "zyn"
2) "zsq"
3) "xf"
4) "lth"

Get the number of elements in the zset (zcard)

127.0.0.1:6379> zcard myzset
(integer) 4

Three special types

Geospatial

Application: geographic queries such as distance between locations, "people nearby", and delivery range. The core is computing distance from longitude and latitude

Add geographic location

127.0.0.1:6379> geoadd china:city 116.4 39.9 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 119.9 34.7 changzhou
(integer) 1
127.0.0.1:6379> geoadd china:city 120.2 31.5 wuxi
(integer) 1
127.0.0.1:6379> geoadd china:city 120.1 30.3 hangzhou
(integer) 1

Gets the added Beijing location

127.0.0.1:6379> geopos china:city beijing
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"

Gets the distance between two cities

127.0.0.1:6379> geodist china:city changzhou wuxi km
"357.0194"
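Under the hood this is a great-circle distance. A rough sketch of the math (the haversine formula, using the Earth radius Redis uses, 6372797.560856 m; the coordinates are the ones geoadd'ed above, and the result is approximate because Redis also quantizes coordinates through a geohash):

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometers between two lon/lat points."""
    r = 6372.797560856  # Earth radius in km, as used by Redis GEO commands
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# changzhou (119.9, 34.7) vs wuxi (120.2, 31.5), as added above
d = haversine_km(119.9, 34.7, 120.2, 31.5)
print(round(d, 1))  # close to geodist's "357.0194"
```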

Find members within a given radius of the point (longitude 110, latitude 30) with georadius

127.0.0.1:6379> geodist chain:city changzhou wuxi km
(nil)
127.0.0.1:6379> geodist china:city changzhou wuxi km
"357.0194"
127.0.0.1:6379> georadius china:city 110 30 400 km
(empty array)
127.0.0.1:6379> georadius china:city 110 30 500 km
(empty array)
127.0.0.1:6379> georadius china:city 110 30 100 km
(empty array)
127.0.0.1:6379> georadius china:city 110 30 1000 km
1) "hangzhou"
2) "wuxi"

georadiusbymember finds other members around a given member: here, cities within a 400 km radius centered on changzhou

127.0.0.1:6379> georadiusbymember china:city changzhou 400 km
1) "changzhou"
2) "wuxi"

BitMaps

Application: clock-ins, sign-ins — anything where each slot has only two states.

Setting bit states (setbit)

127.0.0.1:6379> setbit signs 0 0
(integer) 0
127.0.0.1:6379> setbit signs 1 0
(integer) 0
127.0.0.1:6379> setbit signs 2 1
(integer) 0
127.0.0.1:6379> setbit signs 2 1
(integer) 1
127.0.0.1:6379> setbit signs 3 1
(integer) 0
127.0.0.1:6379> setbit signs 4 1
(integer) 0
127.0.0.1:6379> setbit signs 5 1
(integer) 0
127.0.0.1:6379> setbit signs 6 1
(integer) 0

Counting the set bits (bitcount)

127.0.0.1:6379> bitcount signs
(integer) 5
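A sketch of what setbit/bitcount do under the hood, using a Python bytearray as the bitmap (Redis addresses bits most-significant-first within each byte; the offsets and values below mirror the sign-in session above, so 5 of the 7 days are set):

```python
def setbit(buf: bytearray, offset: int, value: int) -> int:
    """Set the bit at offset (MSB-first, as Redis does); return the old bit."""
    byte, bit = divmod(offset, 8)
    while len(buf) <= byte:          # grow the bitmap on demand, like Redis
        buf.append(0)
    mask = 1 << (7 - bit)
    old = 1 if buf[byte] & mask else 0
    if value:
        buf[byte] |= mask
    else:
        buf[byte] &= ~mask
    return old

def bitcount(buf: bytearray) -> int:
    """Count the set bits across the whole bitmap."""
    return sum(bin(b).count("1") for b in buf)

signs = bytearray()
# a week of sign-in states: days 0-1 missed, days 2-6 signed in
for offset, value in [(0, 0), (1, 0), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)]:
    setbit(signs, offset, value)

print(bitcount(signs))  # 5
```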

HyperLogLog

Application: approximate distinct counting with an extremely small, fixed memory footprint (at most about 12 KB per key, standard error around 0.81%). Highly recommended when a small error is tolerable

Add elements

127.0.0.1:6379> pfadd haplog hs
(integer) 1
127.0.0.1:6379> pfadd haplog lth
(integer) 1
127.0.0.1:6379> pfadd haplog xf
(integer) 1
127.0.0.1:6379> pfcount haplog
(integer) 3

Merge two HyperLogLogs; duplicates are counted once (pfmerge)

127.0.0.1:6379> pfadd haplogs xxxlll
(integer) 1
127.0.0.1:6379> pfmerge haplog haplogs
OK
127.0.0.1:6379> keys *
1) "haplog"
2) "haplogs"
127.0.0.1:6379> pfcount haplog
(integer) 4
127.0.0.1:6379> pfcount haplogs
(integer) 1

Config File Introduction

Units in the config file are case-insensitive

units are case insensitive so 1GB 1Gb 1gB are all the same.

The config can include other configuration files, which is useful when building clusters

# include /path/to/local.conf
# include /path/to/other.conf

Network configuration: bind addresses, protected mode, port number

# bind 192.168.1.100 10.0.0.1 # listen on two specific IPv4 addresses
# bind 123.56.240.222         # listen on one specific public address
# protected-mode yes
# port 6379

Two common settings: daemonize controls whether Redis runs as a background daemon or in the foreground; supervised tells Redis how it is managed by an init system (systemd/upstart)

daemonize yes

supervised no 

When running as a daemon, a PID file must be specified. The PID file prevents multiple copies of the process from starting: after the process starts, a file lock is placed on the .pid file, and only the process holding the lock has write permission (F_WRLCK) to write its own PID into the file; other processes that try to acquire the lock exit automatically.

Log level

The log levels are debug, verbose, notice, and warning; the Redis default is notice. debug can be used in development and testing, and notice is generally used in production

  1. Debug: Prints a lot of information for development and testing
  2. Verbose: Contains a lot of less useful information, but is cleaner than debug
  3. Notice: Applicable to production mode
  4. Warning: Indicates a warning

Log File Path

logfile /usr/local/redis/var/redis.log

databases sets the number of databases (default 16); always-show-logo no suppresses the ASCII-art logo at startup

databases 16

always-show-logo no

Snapshot configuration: for example, save 3600 1 persists if at least 1 key changed within 3600 seconds

# save 3600 1
# save 300 100
# save 60 10000

Whether to stop accepting writes if a background save (bgsave) fails, and whether to compress the RDB file

stop-writes-on-bgsave-error yes

rdbcompression yes

Error checking is performed when saving the RDB file

rdbchecksum yes

RDB directory for storing backup files

dir /usr/local/redis/var

Replication (master/replica) configuration

# replicaof <masterip> <masterport>

Transactions (with internal locks)

Transaction syntax

watch key1 key2 ...: monitor one or more keys; if a watched key is changed by another command before the transaction executes, the transaction is aborted (similar to an optimistic lock)
multi: mark the start of a transaction block (subsequent commands are queued)
exec: execute all queued commands in the transaction block
discard: discard all commands in the transaction block
unwatch: stop watching all keys

Transaction execution code (all fine)

127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) OK
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> get k2
"v2"

Transaction execution code (one instruction is wrong)

127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set q1 11
QUEUED
127.0.0.1:6379(TX)> set q2 33
QUEUED
127.0.0.1:6379(TX)> set q3 44 op
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) OK
3) (error) ERR syntax error
127.0.0.1:6379> mget q1 q2
1) "11"
2) "33"

Redis transactions are a bit special. In MySQL a transaction is all-or-nothing, and the same is true of transactions in application code. In Redis, all the correctly queued commands execute, and a command that errors at runtime just reports the error without affecting the others.

Redis does not guarantee atomicity and there is no concept of isolation levels for transactions

Summary: a Redis transaction is a one-off, sequential, exclusive execution of a series of commands in a queue

Transaction locking comes in two flavors: pessimistic and optimistic. As the names imply, a pessimistic lock locks on every operation, trusting no one. An optimistic lock is the opposite: it trusts everyone and takes no lock; instead, it issues a watch on the value when the transaction starts, and when the transaction executes it checks whether the value still matches what it was at the start. If not, the transaction does not execute.

Example (no distributed setup here; just open two windows, i.e. two client threads):

①: In the first window, set a value to 66, watch it to monitor changes, then start a transaction and queue an increment of 44 (so 66 + 44 should give 110).
②: Do not execute the transaction yet; in the second window, change the value.
③: The value is now 67, not 66, so the transaction in the first window does not execute successfully. The reason: the watched value has been changed.

This is the optimistic lock. I will discuss my own thoughts on concrete optimistic-lock implementations later; here the topic is Redis, not optimistic locks.
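The watch-then-exec check is essentially compare-and-set: snapshot the value (or its version) when you start, and commit only if nothing changed in the meantime. A minimal single-process sketch of that idea (plain Python, not the Redis client; the class and method names are mine, and the 66/44/67 numbers mirror the two-window demo above):

```python
class WatchedStore:
    """Toy optimistic-lock store: exec succeeds only if the watched key is untouched."""

    def __init__(self):
        self.data = {}
        self.versions = {}  # bumped on every write

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def watch(self, key):
        return self.versions.get(key, 0)  # snapshot the current version

    def exec(self, key, snapshot, update):
        if self.versions.get(key, 0) != snapshot:
            return None  # value changed since watch: abort, like Redis exec -> (nil)
        self.set(key, update(self.data[key]))
        return self.data[key]

store = WatchedStore()
store.set("w1", 66)
snap = store.watch("w1")                         # window 1: watch w1, queue +44
store.set("w1", 67)                              # window 2 changes w1 before exec
result = store.exec("w1", snap, lambda v: v + 44)
print(result)                                    # None: the transaction did not run
```

Retrying is then just re-watching and re-executing: a fresh snapshot of the new value succeeds.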

AOF, RDB persistence

The RDB backup file's name is set by dbfilename in the config; the AOF backup file's name by appendfilename. The difference between the two: RDB takes snapshots, copying the whole dataset as a backup file, while AOF appends each write operation.

At startup, Redis checks the dump.rdb and appendonly.aof files to see whether data needs to be restored. If a file has been deleted, it is created automatically.

There are also tools involved in restoring data, used to repair the contents of backup files. I describe the common tools below.

The cluster

Building a cluster

When it comes to clustering, I think none of us are strangers to it: this is what is usually called high availability, which is inseparable from the "3 Vs and 3 highs": massive, diverse, real-time; high concurrency, high performance, high availability.

First of all, a Redis cluster would normally use at least 3 machines. To simplify, I use one server with three Redis configurations and start three instances, which amounts to the same thing as multiple machines. Edit the port in the three config files to 6379, 6380, and 6381 respectively, start the three instances, then check their running status and confirm 6379, 6380, and 6381 are all up.

Check the status of the current instance with the info replication command: role is the current role, and connected_slaves is the number of replicas

To set up replication, you only need to configure the replica (slave), not the master

Configure master and slave libraries by instruction

That is the command-based way to set it up; in real deployments it is all done in configuration. Open redis.conf and set the replicaof line as explained above: the master's IP + port. If the master has a password, also set masterauth, a few lines below.

Sentinel mode

The manual approach to failover is: when the master goes down, manually promote a replica to master. This requires human intervention, is laborious, and causes a period of unavailability, so it is not recommended; more often, we prefer Sentinel mode.

Sentinel mode is a special mode. Redis provides the sentinel command, and the sentinel is an independent process that runs on its own. The principle: the sentinel monitors multiple running Redis instances by sending commands and waiting for the servers to respond.

Sentinel runs as a separate process; once the master goes down, the sentinel processes automatically vote a replica in as the new master.

Introduction to Common Tools

Introduce common tools for Redis

Benchmarking tool: redis-benchmark
AOF file check tool: redis-check-aof
RDB file check tool: redis-check-rdb
Sentinel-mode monitoring tool: redis-sentinel (covered in the cluster section)

Run the stress-testing tool and take a look at its output; it is for reference only.

Publish/Subscribe

Publish/subscribe needs at least two windows (two clients) to test. ①: The first window subscribes to a channel named huanshao and then sits in the waiting state. ②: The second window sends messages: publish helloword to channel huanshao, and the first window receives it automatically.
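With the real client this is just subscribe/publish; as an in-process sketch of the fan-out idea (plain Python, no Redis; the Broker class is mine), every callback subscribed to a channel receives each published message:

```python
from collections import defaultdict

class Broker:
    """Toy pub/sub: each subscriber callback on a channel gets every message."""

    def __init__(self):
        self.channels = defaultdict(list)

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def publish(self, channel, message):
        for callback in self.channels[channel]:
            callback(message)
        return len(self.channels[channel])  # like Redis: number of receivers

received = []
broker = Broker()
broker.subscribe("huanshao", received.append)   # window 1: subscribe huanshao
n = broker.publish("huanshao", "helloword")     # window 2: publish huanshao helloword
print(received, n)                              # ['helloword'] 1
```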

This is Redis publish/subscribe. The application scenario resembles a WeChat official account pushing articles: the blogger publishes a new article, and every subscriber of the account receives it.

Cache penetration, cache avalanche, cache warming, cache degradation

Please go to the first Redis entry for a detailed illustration

Cache penetration: in the normal flow, if the queried data exists in the cache it is returned directly; if not, the database is queried. Due to poorly implemented business logic, or malicious external attacks that keep requesting data that does not exist, the cache never holds those entries, so every such request falls through to the database, putting pressure on it and possibly crashing it.

Cache avalanche: when the cache restarts, or a large portion of cached entries expire within a short period, heavy traffic hits the database directly, the DB buckles under the pressure, and the system crashes. For example, during a flash-sale campaign a large amount of product, inventory, and related information is queried; to protect the product database, the data is cached. Unfortunately, a large number of hot product caches expire at the same moment during the campaign, the query traffic lands on the database, and the database comes under great pressure.

Cache preheating: loading the relevant data into the cache right after the system goes online, so user requests hit prewarmed cache instead of triggering query-then-cache. Without preheating, Redis starts out empty, and early high-concurrency traffic goes straight to the database, putting it under pressure.

Cache degradation: when the cache misses or the cache service fails, we do not fall back to the database; we serve a partial in-memory data cache directly, or return default data. For example, an application's home page is usually a heavily visited place that shows recommended products, which are stored in the cache. To guard against cache failures, the hot product data is also kept in application memory, along with some default product information.

Cache penetration solution: a simple fix is to also store the result of looking up non-existent data in the cache, so those lookups no longer penetrate it. By caching the "not found" result for non-existent products, the penetration risk is effectively avoided.

Cache avalanche solutions:

  1. Classify goods by popularity: frequently purchased categories get a longer cache period, relatively unpopular goods a shorter one;
  2. When setting each item's cache lifetime, add a random interval factor, e.g. pick the expiry randomly between 5 and 10 minutes;
  3. Estimate the DB capacity in advance, so that if the cache fails the database can withstand the traffic pressure to some extent.
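Point 2 (randomized expiry) is easy to sketch: instead of one fixed TTL, each key gets a base TTL plus a random offset, so hot keys do not all expire in the same instant. The 5-10 minute window follows the text; the function name is mine.

```python
import random

def jittered_ttl(base_seconds: int = 300, spread_seconds: int = 300) -> int:
    """Pick an expiry between base and base+spread (here: 5 to 10 minutes)."""
    return base_seconds + random.randint(0, spread_seconds)

# usage with a Redis client would look like: client.setex(key, jittered_ttl(), value)
ttls = [jittered_ttl() for _ in range(1000)]
print(min(ttls) >= 300 and max(ttls) <= 600)  # True: all in the 5-10 min window
```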

Cache preheating solution:

  1. When the amount of data is not large, loading cache action is carried out when the project is started.
  2. When there is a large amount of data, set a scheduled task script to refresh the cache.
  3. If the amount of data is too large, ensure that hotspot data is loaded to the cache in advance

After writing these articles over the past few days, I have a much closer understanding of Redis. I will share the details of the project as scheduled later! Thank you for your support.