Redis
1. Overview of NoSQL
1.1 Why use NoSQL?
1. The standalone MySQL era!
In the 1990s, a basic website did not get much traffic, so a single database was quite enough!
Most sites back then were static HTML pages, so the servers were under very little pressure!
Think about this situation: What is the bottleneck of the entire site?
- The amount of data is too large for one machine to hold!
- The data indexes (B+ Tree) no longer fit in one machine's memory
- The traffic (mixed reads and writes) is more than one server can withstand ~
As soon as you run into one of these three problems, you have to evolve!
2. Memcached + MySQL + vertical splitting (read/write separation)
80% of a website's traffic is reads, and hitting the database for every query is expensive! To reduce the pressure on the database, we can use a cache to keep things efficient!
Evolution: optimize data structures and indexes -> file caching (reduce IO) -> Memcached (the hottest technology of its day!)
3. Splitting databases and tables + horizontal splitting + MySQL cluster
As technology and business developed, the demands on engineers kept getting higher!
Essence of the problem: the database (reads and writes)
MyISAM: used table locks in the early years, which is very inefficient! Under high concurrency it causes serious lock contention
Switch to InnoDB: row-level locking
Gradually, splitting databases and tables was used to relieve write pressure! MySQL introduced table partitioning in that era, but not many companies used it!
MySQL clusters satisfied all the needs of the time!
4. The recent past
The world changed dramatically between 2010 and 2020 (location data, music, trending lists: these are all data too!)
Relational databases such as MySQL are no longer enough! There is a lot of data and it changes fast!
MySQL is sometimes used to store large files, blog posts, and images; the tables become huge and inefficient! If there were a database dedicated to this kind of data, MySQL would be under much less pressure (study how to handle these problems!). Under the IO pressure of big data, such tables are almost impossible to alter!
A typical Internet project today:
Why NoSQL!
User profiles, social networks, geographic locations, user-generated data, user logs and so on have all exploded!
This is where we need NoSQL databases; NoSQL handles the situations above very well!
1.2. What is NoSQL
NoSQL
NoSQL = Not Only SQL
Relational databases: tables, rows, columns
NoSQL refers to non-relational databases, which emerged with the Web 2.0 Internet! Traditional relational databases can hardly cope with the Web 2.0 era, especially very large, highly concurrent communities; many hard-to-overcome problems were exposed. NoSQL is developing very fast in today's big-data environment, and Redis is developing the fastest; it is a technology we must master! Many data types, such as users' personal information, social networks, and geographic locations, do not need a fixed format to be stored!
NoSQL can scale horizontally without extra work! It is controlled with key-value pairs, like Map<String, Object>!
NoSQL characteristics
Decoupling!
- Easy to scale out (there are no relationships between data, so it scales very well!)
- High performance with large data volumes (Redis can write 80,000 times and read 110,000 times per second; NoSQL caches at the record level, a fine-grained cache, so performance is relatively high!)
- Diverse data types! (No upfront database design required, just use it as you go! When the data set is huge, many people cannot design the tables anyway!)
Traditional RDBMS vs NoSQL
Traditional RDBMS
- Structured organization
- SQL
- Data and relationships stored in separate tables (rows and columns)
- Data manipulation language, data definition language
- Strict consistency
- Basic transactions
- ...
NoSQL
- Not just data
- No fixed query language
- Key-value storage, column storage, document storage, graph databases (social relationships)
- Eventual consistency
- CAP theorem and BASE (multi-site active-active): junior-architect territory! (Crazy God's motto: just keep learning until you drop!)
- High performance, high availability, high scalability
- ...
Understanding: 3V + 3 High
The 3Vs of the big-data era mainly describe the problems:
- Volume (massive data)
- Variety (diverse data)
- Velocity (real-time data)
The 3 Highs of the big-data era mainly describe the requirements on programs:
- High concurrency
- High scalability
- High performance
Real practice in the company: NoSQL + RDBMS used together is the strongest, Alibaba’s architecture evolution!
There is no high or low technology, it depends on how you use it! (Improve internal skills, improve thinking!)
1.3 Analysis of Alibaba’s evolution
Question to consider: Is all this stuff in one database?
Technology cannot be rushed; the more patiently you learn it, the more solid it becomes!
Open source is the king of technology!
No Internet company's product is as simple as what users see on the surface!
Many companies are doing the same kind of business (competing products);
With this competition, the business keeps getting better, and the demands on developers keep getting higher!
If you’re going to be an architect: There’s nothing that can’t be solved by adding a layer!
# 1. Basic product information
Name, price, merchant information; a relational database can handle this! MySQL / Oracle
(Taobao went "de-IOE" early on, led by Wang Jian; recommended reading: the article about Alibaba Cloud's "maniacs". Taobao's internal MySQL is not the MySQL everyone else uses.)
# 2. Product descriptions and comments (lots of text)
A document database: MongoDB
# 3. Pictures
Distributed file systems: FastDFS, Taobao's own TFS, Google's GFS, Hadoop HDFS, Alibaba Cloud OSS
# 4. Product keywords (search)
- Search engines: Solr, Elasticsearch
- ISearch: Duolong (learn from these tech gurus! Every impressive engineer went through painful, helpless years; as long as you stubbornly keep at it, you will eventually be impressive too!)
# 5. Hot / trending product information
- In-memory databases: Redis, Tair, Memcached...
# 6. Product transactions, external payment interfaces
- Third-party applications
You should know: the technology behind a seemingly simple web page is by no means as simple as we imagine!
Problems of large Internet applications:
- Too many data types!
- Too many data sources, frequent refactoring!
- The data needs to be transformed, and on a large scale!
The solution:
The above is an overview of NoSQL; it will not only broaden your knowledge but also help you understand how the big companies work!
1.4 Four categories of NoSQL
KV key-value databases:
- Sina (Weibo): Redis
- Meituan: Redis + Tair
- Alibaba, Baidu: Redis + Memcached
Document databases (BSON format, similar to JSON):
- MongoDB (generally a must-know)
  - MongoDB is a database based on distributed file storage, written in C++, mainly used to handle large numbers of documents!
  - MongoDB is an intermediate product between relational and non-relational databases! It is the most feature-rich non-relational database and the one that most resembles a relational database!
- CouchDB
Column-family databases:
- HBase
- Distributed file systems
Graph databases:
- They store relationships, not graphics: friend circles, social networks, ad recommendations!
- Neo4j, InfoGrid
A comparison of the four categories!
The heart of fear can make people progress! The universe! Science fiction!
The meaning of life? Pursue happiness (help others, gratitude), explore the unknown (study hard, don’t be abandoned by the society)
2. Introduction to Redis
2.1 Overview
What is Redis?
Redis (Remote Dictionary Server)
Redis is an open-source, network-enabled, in-memory, persistent, log-structured key-value database written in ANSI C, with APIs in many languages.
Redis periodically writes updated data to disk, or appends modification operations to a log file, and implements master-slave synchronization on top of this.
Free and open source! It is one of the hottest NoSQL technologies today! It is also called a structured database!
What can Redis do?
- In-memory storage with persistence; data in memory is lost when the power goes off, so persistence is important (RDB, AOF)
- Highly efficient, can be used for caching
- Publish/subscribe system
- Map (geospatial) information analysis
- Timers, counters (page views!)
- ...
Features
1. Diverse data types
2. Persistence
3. Clustering
4. Transactions
Things you need for learning
1. Official website: redis.io
2. Download: from the official website!
Note: Windows builds are available on GitHub (but have been unmaintained for a long time!)
Redis recommendation is built on Linux server, we are based on Linux learning!
2.2. Windows Installation
1. Download the installation package: github.com/dmajkic/red…
2. Get the compressed package after downloading:
Decompression to their own computer environment directory can be! Redis is very small, only 5M
4, Start Redis, double-click to run the service!
5. Use the Redis customer list to connect to Redis
Keep in mind that Windows is easy to use, but Redis recommends using Linux for development!
2.3. Linux Installation
1. Download the installation package! redis-5.0.8.tar.gz
2. Unpack the Redis installation package! Put it under /opt
tar -zxvf redis-6.0.10.tar.gz
3. Enter the extracted directory; you can see the Redis configuration file (redis.conf)
4. Install the basic build environment
yum install gcc-c++   # note: building Redis 6.0+ with the old default gcc will fail; a newer gcc is required
make
make install
5. The default Redis installation path is /usr/local/bin
6. Copy the Redis configuration file into our current directory
7. Redis does not run in the background by default; modify the configuration file (daemonize yes)!
8. Start the Redis service!
9. Use redis-cli to connect and test!
10. Check whether the redis process has started
11. How do you shut down the Redis service? shutdown
12. Check whether the process still exists
13. Later we will start multiple Redis instances on a single machine for cluster testing!
2.4. Test performance
Redis-benchmark is a stress test tool!
Official performance testing tools!
Redis-benchmark command parameter!
Pictures from the rookie tutorial:
Let’s take a quick test:
# Test: 100 concurrent connections, 100,000 requests
redis-benchmark -h localhost -p 6379 -c 100 -n 100000
How do we interpret this output?
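As a hedged aside (not part of the original notes): redis-benchmark can also narrow the test to specific commands and print a compact one-line-per-test summary; something like the following should work on a stock install:
# only benchmark SET and GET, quiet output (one line per test)
redis-benchmark -h localhost -p 6379 -t set,get -n 100000 -q
# use a 1 KB payload instead of the default 3 bytes
redis-benchmark -h localhost -p 6379 -t set -n 100000 -d 1024 -q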
2.5. Basic knowledge
Redis has 16 databases by default
The default is 0
You can switch databases using SELECT!
127.0.0.1:6379> select 3        # switch database
OK
127.0.0.1:6379[3]> DBSIZE       # check the DB size!
(integer) 0
127.0.0.1:6379[3]> keys *       # view all keys in the current database
1) "name"
- flushdb clears the current database
- FLUSHALL clears all databases
127.0.0.1:6379[3]> flushdb      # clear the current database
OK
127.0.0.1:6379[3]> keys *
(empty list or set)
Consider: Why is Redis 6379! Fan effect! (Just read about it!)
Redis is single threaded!
Understand why Redis is fast: officially, Redis is based on in-memory operations. The CPU is not Redis's performance bottleneck; the bottlenecks are the machine's memory and the network bandwidth. Since it can be implemented with a single thread, a single thread is used!
Redis is written in C, and the official figure is 100,000+ QPS, no worse than Memcached, which is also key-value based!
Why is single-threaded Redis so fast?
Myth 1: a high-performance server must be multi-threaded? Myth 2: multi-threading (with CPU context switching!) must be more efficient than a single thread?
First, understand the speed ordering: CPU > memory > disk.
Core point: Redis keeps all data in memory, so operating on it with a single thread is the most efficient. Multi-threading causes CPU context switches, which are expensive! For an in-memory system, having no context switches is the most efficient; multiple reads and writes all happen on one CPU, and in memory that is the best solution!
3. Five data types
Official document
Redis is an open source (BSD-licensed), in-memory data structure store used as a database, cache, and message broker (MQ). It supports many data structures, such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
3.1. Redis-Key
- keys *              view all keys
- set name kuangshen  set a key and value
- EXISTS name         check whether the current key exists
- move name 1         move the current key to database 1
- EXPIRE name 10      set the key's expiration time, in seconds
- ttl name            show the remaining time to live of the current key
- type name           show the type of the current key
127.0.0.1:6379> keys *              # view all keys
(empty list or set)
127.0.0.1:6379> set name kuangshen  # set a key
OK
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> set age 1
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> EXISTS name         # check whether the current key exists
(integer) 1
127.0.0.1:6379> EXISTS name1
(integer) 0
127.0.0.1:6379> move name 1         # move the current key to database 1
(integer) 1
127.0.0.1:6379> keys *
1) "age"
127.0.0.1:6379> set name qinjiang
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> get name
"qinjiang"
127.0.0.1:6379> EXPIRE name 10      # set the key's expiration time, in seconds
(integer) 1
127.0.0.1:6379> ttl name            # check the remaining time of the current key
(integer) 4
127.0.0.1:6379> ttl name
(integer) 3
127.0.0.1:6379> ttl name
(integer) 2
127.0.0.1:6379> ttl name
(integer) 1
127.0.0.1:6379> ttl name
(integer) -2
127.0.0.1:6379> get name
(nil)
127.0.0.1:6379> type name           # check the type of the current key!
string
127.0.0.1:6379> type age
string
If you come across a command you don't know, you can check the help documentation on the official website!
3.2. String (String)
For 90% of Java programmers, using Redis means using only the String type!
- set key1 v1             set a value
- get key1                get the value
- keys *                  get all keys
- EXISTS key1             check whether a key exists
- APPEND key1 "hello"     append to a string; if the key does not exist, it behaves like set
- STRLEN key1             get the length of the string!
- incr views              increment by 1
- decr views              decrement by 1
- INCRBY views 10         increment by a given step
- DECRBY views 5          decrement by a given step
- GETRANGE key1 0 3       get the substring in range [0,3]
- GETRANGE key1 0 -1      get the whole string, same as get key1
- SETRANGE key2 1 xx      overwrite the string starting at the given offset!
- setex key3 30 "hello"   set the value of key3 to hello, expiring after 30 seconds
  - setex (set with expire): set a value together with an expiration time
- setnx mykey "redis"     if mykey does not exist, create it; if it already exists, the set fails!
  - setnx (set if not exist): set only when the key does not exist (often used for distributed locks! a minimal sketch follows this list)
- mset k1 v1 k2 v2 k3 v3  set multiple values at once
- mget k1 k2 k3           get multiple values at once
- msetnx k1 v1 k4 v4      msetnx is atomic: either all succeed together or all fail together
- mset user:1:name zhangsan user:1:age 2
- set user:1 {name:zhangsan,age:3}   store an object as a JSON string
  The key uses a clever design: user:{id}:{field}; this design is perfectly fine in Redis!
- mget user:1:name user:1:age
- getset db redis         if no value exists, return nil and set the value; if a value exists, return the old value and set the new one
  - getset: get first, then set
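As a hedged aside (not part of the original notes): the classic minimal lock pattern built on the setnx idea sets the value and expiration atomically with SET ... NX EX, so the lock cannot be left behind forever if the holder crashes. The key and value names below (lock:order, client-123) are just examples:
# acquire: succeeds only if lock:order does not exist, and auto-expires after 10 seconds
127.0.0.1:6379> set lock:order client-123 NX EX 10
OK
# a second client trying the same acquire gets nil (the lock is held)
127.0.0.1:6379> set lock:order client-456 NX EX 10
(nil)
# release: real code should check the value (owner) before deleting, ideally in a Lua script
127.0.0.1:6379> del lock:order
(integer) 1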
##########################################################################
127.0.0.1:6379> set key1 v1         # set a value
OK
127.0.0.1:6379> get key1            # get the value
"v1"
127.0.0.1:6379> keys *              # get all keys
1) "key1"
127.0.0.1:6379> EXISTS key1         # check whether a key exists
(integer) 1
127.0.0.1:6379> APPEND key1 "hello" # append a string; behaves like set if the key does not exist
(integer) 7
127.0.0.1:6379> get key1
"v1hello"
127.0.0.1:6379> STRLEN key1         # get the length of the string!
(integer) 7
127.0.0.1:6379> APPEND key1 ",kaungshen"
(integer) 17
127.0.0.1:6379> STRLEN key1
(integer) 17
127.0.0.1:6379> get key1
"v1hello,kaungshen"
##########################################################################
# i++
# i += step
127.0.0.1:6379> set views 0         # initial page views: 0
OK
127.0.0.1:6379> get views
"0"
127.0.0.1:6379> incr views          # increment by 1, page views become 1
(integer) 1
127.0.0.1:6379> incr views
(integer) 2
127.0.0.1:6379> get views
"2"
127.0.0.1:6379> decr views          # decrement page views by 1
(integer) 1
127.0.0.1:6379> decr views
(integer) 0
127.0.0.1:6379> decr views
(integer) -1
127.0.0.1:6379> get views
"-1"
127.0.0.1:6379> INCRBY views 10     # set a step size, i.e. a specific increment!
(integer) 9
127.0.0.1:6379> INCRBY views 10
(integer) 19
127.0.0.1:6379> DECRBY views 5
(integer) 14
##########################################################################
# string ranges
127.0.0.1:6379> set key1 "hello,kuangshen"  # set key1
OK
127.0.0.1:6379> get key1
"hello,kuangshen"
127.0.0.1:6379> GETRANGE key1 0 3   # get the substring in range [0,3]
"hell"
127.0.0.1:6379> GETRANGE key1 0 -1  # get the whole string, same as get key1
"hello,kuangshen"
# replace!
127.0.0.1:6379> set key2 abcdefg
OK
127.0.0.1:6379> get key2
"abcdefg"
127.0.0.1:6379> SETRANGE key2 1 xx  # overwrite the string starting at the given offset!
(integer) 7
127.0.0.1:6379> get key2
"axxdefg"
##########################################################################
# setex (set with expire)     set with an expiration time
# setnx (set if not exist)    set only if the key does not exist
127.0.0.1:6379> setex key3 30 "hello"  # set key3 to hello, expiring after 30 seconds
OK
127.0.0.1:6379> ttl key3
(integer) 26
127.0.0.1:6379> get key3
"hello"
127.0.0.1:6379> setnx mykey "redis"    # if mykey does not exist, create it
(integer) 1
127.0.0.1:6379> keys *
1) "key2"
2) "mykey"
3) "key1"
127.0.0.1:6379> ttl key3
(integer) -2
127.0.0.1:6379> setnx mykey "MongoDB"  # mykey already exists, so the set fails!
(integer) 0
127.0.0.1:6379> get mykey
"redis"
##########################################################################
# mset and mget
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3  # set multiple values at once
OK
127.0.0.1:6379> keys *
1) "k1"
2) "k2"
3) "k3"
127.0.0.1:6379> mget k1 k2 k3           # get multiple values at once
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> msetnx k1 v1 k4 v4      # msetnx is atomic: all succeed together or all fail together
(integer) 0
127.0.0.1:6379> get k4
(nil)
# objects
set user:1 {name:zhangsan,age:3}  # store a user:1 object as a JSON string!
# clever key design: user:{id}:{field}
127.0.0.1:6379> mset user:1:name zhangsan user:1:age 2
OK
127.0.0.1:6379> mget user:1:name user:1:age
1) "zhangsan"
2) "2"
##########################################################################
# getset: get first, then set
127.0.0.1:6379> getset db redis     # no value exists: return nil and set the value
(nil)
127.0.0.1:6379> get db
"redis"
127.0.0.1:6379> getset db mongodb   # a value exists: return the old value and set the new one
"redis"
127.0.0.1:6379> get db
"mongodb"
The underlying data structure is the same!
Typical String use cases: the value can be a number as well as a string!
- Counters
- Counting quantities across multiple units
- Follower counts
- Object cache storage!
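A hedged sketch of the counter and object-cache patterns listed above (the key names article:1001:views and user:2:... are made-up examples):
# page-view counter for an article
127.0.0.1:6379> incr article:1001:views
(integer) 1
127.0.0.1:6379> incrby article:1001:views 10
(integer) 11
# object cache using the user:{id}:{field} key design
127.0.0.1:6379> mset user:2:name lisi user:2:age 18
OK
127.0.0.1:6379> mget user:2:name user:2:age
1) "lisi"
2) "18"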
3.3. List
Basic data type: the list
In Redis, we can use lists as stacks, queues, and blocking queues!
All list commands start with l, and Redis commands are not case-sensitive
- LPUSH list one          # insert one or more values at the head of the list (left)
- LRANGE list 0 -1        # get all values in the list
- LRANGE list 0 1         # get values in a specific range
- Rpush list right        # insert one or more values at the tail of the list (right)
- Lpop list               # remove the first element of the list
- Rpop list               # remove the last element of the list
- Lindex list 1           # get a value from the list by index!
- Llen list               # return the length of the list
- Lrem list 1 one         # remove a specified number of occurrences of a value from the list
- Ltrim mylist 1 2        # trim by index; the list is changed, only the kept elements remain!
- Rpoplpush mylist myotherlist  # remove the last element of the list and push it onto a new list
- EXISTS list             # check whether this list exists
- Lset list 0 item        # error if the list does not exist; if it exists, update the value at the index
- LINSERT mylist before "world" "other"   # insert a value before a given element
- LINSERT mylist after "world" "new"      # insert a value after a given element
##########################################################################
127.0.0.1:6379> LPUSH list one      # insert one or more values at the head of the list (left)
(integer) 1
127.0.0.1:6379> LPUSH list two
(integer) 2
127.0.0.1:6379> LPUSH list three
(integer) 3
127.0.0.1:6379> LRANGE list 0 -1    # get the values in the list!
1) "three"
2) "two"
3) "one"
127.0.0.1:6379> LRANGE list 0 1     # get values in a specific range!
1) "three"
2) "two"
127.0.0.1:6379> Rpush list right    # insert one or more values at the tail of the list (right)
(integer) 4
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "two"
3) "one"
4) "right"
##########################################################################
# LPOP
# RPOP
127.0.0.1:6379> Lpop list           # remove the first element of the list
"three"
127.0.0.1:6379> Rpop list           # remove the last element of the list
"right"
127.0.0.1:6379> LRANGE list 0 -1
1) "two"
2) "one"
##########################################################################
# Lindex
127.0.0.1:6379> LRANGE list 0 -1
1) "two"
2) "one"
127.0.0.1:6379> lindex list 1       # get a value from the list by index!
"one"
127.0.0.1:6379> lindex list 0
"two"
##########################################################################
# Llen
127.0.0.1:6379> Lpush list one
(integer) 1
127.0.0.1:6379> Lpush list two
(integer) 2
127.0.0.1:6379> Lpush list three
(integer) 3
127.0.0.1:6379> Llen list           # return the length of the list
(integer) 3
##########################################################################
# Lrem: remove a specified value!
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "three"
3) "two"
4) "one"
127.0.0.1:6379> lrem list 1 one     # remove a specified number of occurrences of a value
(integer) 1
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "three"
3) "two"
127.0.0.1:6379> lrem list 1 three
(integer) 1
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "two"
127.0.0.1:6379> Lpush list three
(integer) 3
127.0.0.1:6379> lrem list 2 three
(integer) 2
127.0.0.1:6379> LRANGE list 0 -1
1) "two"
##########################################################################
# Ltrim: trim the list!
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> Rpush mylist "hello"
(integer) 1
127.0.0.1:6379> Rpush mylist "hello1"
(integer) 2
127.0.0.1:6379> Rpush mylist "hello2"
(integer) 3
127.0.0.1:6379> Rpush mylist "hello3"
(integer) 4
127.0.0.1:6379> ltrim mylist 1 2    # trim by index; the list is changed, only the kept elements remain!
OK
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello1"
2) "hello2"
##########################################################################
# rpoplpush: remove the last element of the list and push it onto a new list!
127.0.0.1:6379> rpush mylist "hello"
(integer) 1
127.0.0.1:6379> rpush mylist "hello1"
(integer) 2
127.0.0.1:6379> rpush mylist "hello2"
(integer) 3
127.0.0.1:6379> rpoplpush mylist myotherlist  # remove the last element and push it onto the new list!
"hello2"
127.0.0.1:6379> lrange mylist 0 -1            # view the original list
1) "hello"
2) "hello1"
127.0.0.1:6379> lrange myotherlist 0 -1       # view the target list; the value is indeed there!
1) "hello2"
##########################################################################
# lset: replace the value at the given index with another value (an update operation)
127.0.0.1:6379> EXISTS list         # check whether the list exists
(integer) 0
127.0.0.1:6379> lset list 0 item    # if the list does not exist, updating it reports an error
(error) ERR no such key
127.0.0.1:6379> lpush list value1
(integer) 1
127.0.0.1:6379> LRANGE list 0 0
1) "value1"
127.0.0.1:6379> lset list 0 item    # if it exists, update the value at the current index
OK
127.0.0.1:6379> LRANGE list 0 0
1) "item"
127.0.0.1:6379> lset list 1 other   # if the index does not exist, an error is reported!
(error) ERR index out of range
##########################################################################
# linsert: insert a specific value before or after an element in the list!
127.0.0.1:6379> Rpush mylist "hello"
(integer) 1
127.0.0.1:6379> Rpush mylist "world"
(integer) 2
127.0.0.1:6379> LINSERT mylist before "world" "other"
(integer) 3
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello"
2) "other"
3) "world"
127.0.0.1:6379> LINSERT mylist after world new
(integer) 4
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello"
2) "other"
3) "world"
4) "new"
Summary
- A Redis list is actually a linked list; values can be inserted before or after a node, on the left or on the right
- If the key does not exist, a new linked list is created
- If the key exists, new content is added
- If all values are removed, the empty list no longer exists as a key!
- Inserting or changing values at either end is the most efficient! Operating on middle elements is relatively inefficient
Message queue! Queue (Lpush + Rpop), stack (Lpush + Lpop)!
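A hedged sketch of the queue vs. stack semantics described above (the key name jobs is an example); BRPOP is the blocking variant mentioned earlier:
# queue (FIFO): push on the left, pop on the right
127.0.0.1:6379> lpush jobs job1 job2
(integer) 2
127.0.0.1:6379> rpop jobs        # the oldest element comes out first
"job1"
# stack (LIFO): push on the left, pop on the left
127.0.0.1:6379> lpop jobs        # the newest element comes out first
"job2"
# blocking queue: wait up to 5 seconds for an element to arrive
127.0.0.1:6379> brpop jobs 5
(nil)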
3.4. Set
Values in a set cannot repeat!
- A list is ordered and its values can repeat
- A set is an unordered collection and its values cannot repeat
##########################################################################
127.0.0.1:6379> sadd myset "hello"          # add an element to the set
(integer) 1
127.0.0.1:6379> sadd myset "kuangshen"
(integer) 1
127.0.0.1:6379> sadd myset "lovekuangshen"
(integer) 1
127.0.0.1:6379> SMEMBERS myset              # view all values of the given set
1) "hello"
2) "lovekuangshen"
3) "kuangshen"
127.0.0.1:6379> SISMEMBER myset hello       # check whether a value is in the set
(integer) 1
127.0.0.1:6379> SISMEMBER myset world
(integer) 0
##########################################################################
127.0.0.1:6379> scard myset                 # get the number of elements in the set
(integer) 4
##########################################################################
# srem
127.0.0.1:6379> srem myset hello            # remove the specified element from the set
(integer) 1
127.0.0.1:6379> scard myset
(integer) 3
127.0.0.1:6379> SMEMBERS myset
1) "lovekuangshen2"
2) "lovekuangshen"
3) "kuangshen"
##########################################################################
# a set is an unordered, non-repeating collection; pick elements at random!
127.0.0.1:6379> SMEMBERS myset
1) "lovekuangshen2"
2) "lovekuangshen"
3) "kuangshen"
127.0.0.1:6379> SRANDMEMBER myset           # pick one element at random
"kuangshen"
127.0.0.1:6379> SRANDMEMBER myset
"kuangshen"
127.0.0.1:6379> SRANDMEMBER myset 2         # pick a specified number of elements at random
1) "lovekuangshen"
2) "lovekuangshen2"
127.0.0.1:6379> SRANDMEMBER myset
"lovekuangshen2"
##########################################################################
# spop: remove keys at random!
127.0.0.1:6379> SMEMBERS myset
1) "lovekuangshen2"
2) "lovekuangshen"
3) "kuangshen"
127.0.0.1:6379> spop myset                  # randomly remove an element from the set!
"lovekuangshen2"
127.0.0.1:6379> spop myset
"lovekuangshen"
127.0.0.1:6379> SMEMBERS myset
1) "kuangshen"
##########################################################################
# smove: move a specified value to another set!
127.0.0.1:6379> sadd myset "hello"
(integer) 1
127.0.0.1:6379> sadd myset "world"
(integer) 1
127.0.0.1:6379> sadd myset "kuangshen"
(integer) 1
127.0.0.1:6379> sadd myset2 "set2"
(integer) 1
127.0.0.1:6379> smove myset myset2 "kuangshen"   # move a specified value to another set!
(integer) 1
127.0.0.1:6379> SMEMBERS myset
1) "world"
2) "hello"
127.0.0.1:6379> SMEMBERS myset2
1) "kuangshen"
2) "set2"
##########################################################################
# Weibo, Bilibili: common follows! (intersection)
# set operations:
# - difference: SDIFF
# - intersection: SINTER
# - union: SUNION
127.0.0.1:6379> SDIFF key1 key2             # difference
1) "b"
2) "a"
127.0.0.1:6379> SINTER key1 key2            # intersection; mutual friends can be done this way
1) "c"
127.0.0.1:6379> SUNION key1 key2            # union
1) "b"
2) "c"
3) "e"
4) "a"
5) "d"
On Weibo, user A puts everyone they follow into one set, and their followers into another set!
Common follows, common interests, second-degree friends, friend recommendations! (six degrees of separation theory)
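A hedged sketch of the "mutual follows" idea described above (the key names follow:A, follow:B, and common:AB are made up for illustration):
# the accounts A and B each follow
127.0.0.1:6379> sadd follow:A kuangshen java redis
(integer) 3
127.0.0.1:6379> sadd follow:B kuangshen mysql redis
(integer) 3
# accounts both A and B follow (mutual follows)
127.0.0.1:6379> SINTER follow:A follow:B
1) "kuangshen"
2) "redis"
# store the result so it can be reused
127.0.0.1:6379> SINTERSTORE common:AB follow:A follow:B
(integer) 2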
3.5. Hash
A Map collection: key -> map! Here the value is itself a map of field-value pairs! In essence it is not much different from String, just a slightly more complex key-value!
hset myhash field kuangshen
##########################################################################
127.0.0.1:6379> hset myhash field1 kuangshen     # set one field-value pair
(integer) 1
127.0.0.1:6379> hget myhash field1               # get a field value
"kuangshen"
127.0.0.1:6379> hmset myhash field1 hello field2 world   # set multiple field-value pairs
OK
127.0.0.1:6379> hmget myhash field1 field2       # get multiple field values
1) "hello"
2) "world"
127.0.0.1:6379> hgetall myhash                   # get all the fields and values
1) "field1"
2) "hello"
3) "field2"
4) "world"
127.0.0.1:6379> hdel myhash field1               # delete a field from the hash; its value disappears too!
(integer) 1
127.0.0.1:6379> hgetall myhash
1) "field2"
2) "world"
##########################################################################
# hlen
127.0.0.1:6379> hmset myhash field1 hello field2 world
OK
127.0.0.1:6379> hgetall myhash
1) "field2"
2) "world"
3) "field1"
4) "hello"
127.0.0.1:6379> hlen myhash                      # get the number of fields in the hash
(integer) 2
##########################################################################
127.0.0.1:6379> HEXISTS myhash field1            # check whether the given field exists in the hash!
(integer) 1
127.0.0.1:6379> HEXISTS myhash field3
(integer) 0
##########################################################################
# get only all the fields
# get only all the values
127.0.0.1:6379> hkeys myhash                     # get only all the fields
1) "field2"
2) "field1"
127.0.0.1:6379> hvals myhash                     # get only all the values
1) "world"
2) "hello"
##########################################################################
# hincrby / hsetnx
127.0.0.1:6379> hset myhash field3 5             # set an initial value
(integer) 1
127.0.0.1:6379> HINCRBY myhash field3 1          # increment by a given step!
(integer) 6
127.0.0.1:6379> HINCRBY myhash field3 -1
(integer) 5
127.0.0.1:6379> hsetnx myhash field4 hello       # if the field does not exist, it can be set
(integer) 1
127.0.0.1:6379> hsetnx myhash field4 world       # if it already exists, it cannot be set
(integer) 0
Hashes suit data that changes, such as user information (name, age), especially frequently changing fields! Hashes are better suited to storing objects, while Strings are better suited to plain strings!
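A hedged sketch of the user-object pattern (the key user:1000 and its fields are illustrative):
# store a user object as a hash, one field per attribute
127.0.0.1:6379> hmset user:1000 name kuangshen age 18
OK
# update a single attribute without rewriting the whole object
127.0.0.1:6379> HINCRBY user:1000 age 1
(integer) 19
127.0.0.1:6379> hgetall user:1000
1) "name"
2) "kuangshen"
3) "age"
4) "19"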
3.6. Zset (sorted set)
On top of set, a score is added: set is "k1 v1", zset is "k1 score1 v1"
127.0.0.1:6379> zadd myset 1 one            # add one value
(integer) 1
127.0.0.1:6379> zadd myset 2 two 3 three    # add multiple values
(integer) 2
127.0.0.1:6379> ZRANGE myset 0 -1
1) "one"
2) "two"
3) "three"
##########################################################################
127.0.0.1:6379> zadd salary 2500 xiaohong   # add three users
(integer) 1
127.0.0.1:6379> zadd salary 5000 zhangsan
(integer) 1
127.0.0.1:6379> zadd salary 500 kaungshen
(integer) 1
# ZRANGEBYSCORE key min max
127.0.0.1:6379> ZRANGEBYSCORE salary -inf +inf            # show all users, sorted from low to high!
1) "kaungshen"
2) "xiaohong"
3) "zhangsan"
127.0.0.1:6379> ZREVRANGE salary 0 -1                     # sort from high to low!
1) "zhangsan"
2) "kaungshen"
127.0.0.1:6379> ZRANGEBYSCORE salary -inf +inf withscores # show all users with their scores
1) "kaungshen"
2) "500"
3) "xiaohong"
4) "2500"
5) "zhangsan"
6) "5000"
127.0.0.1:6379> ZRANGEBYSCORE salary -inf 2500 withscores # show employees with salary at or below 2500, ascending!
1) "kaungshen"
2) "500"
3) "xiaohong"
4) "2500"
##########################################################################
# zrem: remove an element
127.0.0.1:6379> zrange salary 0 -1
1) "kaungshen"
2) "xiaohong"
3) "zhangsan"
127.0.0.1:6379> zrem salary xiaohong        # remove the specified element from the sorted set
(integer) 1
127.0.0.1:6379> zrange salary 0 -1
1) "kaungshen"
2) "zhangsan"
127.0.0.1:6379> zcard salary                # get the number of members in the sorted set
(integer) 2
##########################################################################
127.0.0.1:6379> zadd myset 1 hello
(integer) 1
127.0.0.1:6379> zadd myset 2 world 3 kuangshen
(integer) 2
127.0.0.1:6379> zcount myset 1 3            # get the number of members with scores in the given range!
(integer) 3
127.0.0.1:6379> zcount myset 1 2
(integer) 2
There are more APIs than the ones covered here; if you need the rest in your work, just check the official documentation!
Use cases: sorting class grade tables, salary tables!
Weighted messages: ordinary message = 1, important message = 2, judged by weight!
Leaderboards, taking the Top N! (see the sketch below)
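A hedged leaderboard sketch (the key board:video and member names are illustrative):
# score = play count
127.0.0.1:6379> zadd board:video 80 video-a 95 video-b 60 video-c
(integer) 3
# bump a video's score
127.0.0.1:6379> ZINCRBY board:video 10 video-c
"70"
# Top 2, highest score first, with scores
127.0.0.1:6379> ZREVRANGE board:video 0 1 withscores
1) "video-b"
2) "95"
3) "video-a"
4) "80"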
4. Three special data types
4.1. Geospatial
Friends' locations, people nearby, taxi distance calculation?
Redis Geo was introduced in Redis 3.2! This feature can compute geolocation information: the distance between two places, or the people within a given radius!
You can look up some test coordinate data here: www.jsons.cn/lngcodeinfo…
There are only six commands:
Official documentation: www.redis.net.cn/order/3685….
geoadd
# geoadd: add a location
# Rule: the two poles cannot be added directly. We usually download city data and import it in one batch via a Java program!
# Valid longitudes range from -180 to 180 degrees.
# Valid latitudes range from -85.05112878 to 85.05112878 degrees.
# The command returns an error if a coordinate is outside the ranges above.
# 127.0.0.1:6379> geoadd china:city 39.90 116.40 beijing
# (error) ERR invalid longitude,latitude pair 39.900000,116.400000
# key: (longitude, latitude, name)
127.0.0.1:6379> geoadd china:city 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 121.47 31.23 shanghai
(integer) 1
127.0.0.1:6379> geoadd china:city 106.50 29.53 chongqi 114.05 22.52 shengzhen
(integer) 2
127.0.0.1:6379> geoadd china:city 120.16 30.24 hangzhou 108.96 34.26 xian
(integer) 2
geopos
Get the current position: it is always a coordinate value!
127.0.0.1:6379> GEOPOS china:city beijing           # get the longitude and latitude of the specified city!
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
127.0.0.1:6379> GEOPOS china:city beijing chongqi
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
2) 1) "106.49999767541885376"
   2) "29.52999957900659211"
GEODIST
The straight-line distance between two positions!
Units:
- m: meters
- km: kilometers
- mi: miles
- ft: feet
127.0.0.1:6379> GEODIST china:city beijing shanghai km   # straight-line distance from Shanghai to Beijing
"1067.3788"
127.0.0.1:6379> GEODIST china:city beijing chongqi km    # straight-line distance from Chongqing to Beijing
"1464.0708"
georadius: find elements within a given radius, centered on a given longitude and latitude
People near me? (Get the addresses and locations of all nearby people!) Query by radius!
You can limit the results to a specified number of people, e.g. 200
All the data should be in one key (china:city) to make the results accurate!
127.0.0.1:6379> GEORADIUS china:city 110 30 1000 km     # find cities within a 1000 km radius of the point (110, 30)
1) "chongqi"
2) "xian"
3) "shengzhen"
4) "hangzhou"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km
1) "chongqi"
2) "xian"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withdist   # show the distance to the center point
1) 1) "chongqi"
   2) "341.9374"
2) 1) "xian"
   2) "483.8340"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withcoord  # show the other positions' coordinates
1) 1) "chongqi"
   2) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "xian"
   2) 1) "108.96000176668167114"
      2) "34.25999964418929977"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withdist withcoord count 1   # limit to a specified number of results!
1) 1) "chongqi"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withdist withcoord count 2
1) 1) "chongqi"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "xian"
   2) "483.8340"
   3) 1) "108.96000176668167114"
      2) "34.25999964418929977"
GEORADIUSBYMEMBER
# find other elements around a specified element!
127.0.0.1:6379> GEORADIUSBYMEMBER china:city beijing 1000 km
1) "beijing"
2) "xian"
127.0.0.1:6379> GEORADIUSBYMEMBER china:city shanghai 400 km
1) "hangzhou"
2) "shanghai"
GEOHASH command: returns the geohash representation of one or more position elements
This command returns an 11-character geohash string!
# Converts a two-dimensional longitude/latitude into a one-dimensional string; the more similar two strings are, the closer the positions are!
127.0.0.1:6379> geohash china:city beijing chongqi
1) "wx4fbxxfke0"
2) "wm5xzrybty0"
GEO is implemented on top of Zset under the hood! So we can use Zset commands to operate on geo data!
127.0.0.1:6379> ZRANGE china:city 0 -1          # view all elements in the "map"
1) "chongqi"
2) "xian"
3) "shengzhen"
4) "hangzhou"
5) "shanghai"
6) "beijing"
127.0.0.1:6379> zrem china:city beijing         # remove a specified element!
(integer) 1
127.0.0.1:6379> ZRANGE china:city 0 -1
1) "chongqi"
2) "xian"
3) "shengzhen"
4) "hangzhou"
5) "shanghai"
4.2. Hyperloglog
What is cardinality?
A {1,3,5,7,8,7}
B {1,3,5,7,8}
Cardinality (the number of distinct elements) = 5; a small error is acceptable!
Introduction
Redis 2.8.9 added the Hyperloglog data structure!
Redis Hyperloglog is an algorithm for cardinality statistics!
Advantage: the memory footprint is fixed. Counting the cardinality of 2^64 different elements only takes 12 KB of memory! If memory is what you care about, Hyperloglog is the first choice!
Page UV (a person visiting the same site multiple times still counts as one visitor!)
The traditional way is to store user IDs in a set and use the number of elements in the set as the count!
Storing large numbers of user IDs this way gets troublesome! Our purpose is to count, not to save user IDs;
The error rate is 0.81%! For a UV-counting task, that is negligible!
Testing it
127.0.0.1:6379> PFadd mykey a b c d e f g h i j   # create the first group of elements: mykey
(integer) 1
127.0.0.1:6379> PFCOUNT mykey                     # count the cardinality of mykey
(integer) 10
127.0.0.1:6379> PFadd mykey2 i j z x c v b n m    # create the second group of elements: mykey2
(integer) 1
127.0.0.1:6379> PFCOUNT mykey2
(integer) 9
127.0.0.1:6379> PFMERGE mykey3 mykey mykey2       # merge mykey and mykey2 into mykey3 (union)
OK
127.0.0.1:6379> PFCOUNT mykey3                    # look at the cardinality of the union!
(integer) 15
If fault tolerance is allowed, then Hyperloglog must be used!
If fault tolerance is not allowed, just use set or your own data type!
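A hedged sketch of the UV idea (the key names uv:2024-05-01 and user IDs are illustrative); duplicates do not inflate the count:
# record visitors for two days
127.0.0.1:6379> PFADD uv:2024-05-01 user1 user2 user3 user1
(integer) 1
127.0.0.1:6379> PFADD uv:2024-05-02 user2 user4
(integer) 1
# UV for one day (user1 is counted once)
127.0.0.1:6379> PFCOUNT uv:2024-05-01
(integer) 3
# approximate UV across both days
127.0.0.1:6379> PFCOUNT uv:2024-05-01 uv:2024-05-02
(integer) 4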
4.3. Bitmap
Why don't other tutorials like to cover this? There are actually many application scenarios for it in daily life and development; learning it gives you one more tool!
Extra skills are never a burden!
Bit storage
Count user states: active vs. inactive, logged in vs. not logged in, checked in or not over 365 days! Bitmaps can be used for all of these!
Bitmap is a bit-based data structure! Everything is recorded with binary bits; there are only two states, 0 and 1!
365 days = 365 bits; 1 byte = 8 bits; about 46 bytes is enough!
Test
Use a bitmap to record your check-ins from Monday to Sunday!
Monday: 1, Tuesday: 0, Wednesday: 0, Thursday: 1 ...... (a sketch of the setbit commands is shown below)
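The original setbit screenshot is not included here; a hedged minimal sketch of recording the week might look like this (offset 0 = Monday, value 1 = checked in):
127.0.0.1:6379> setbit sign 0 1
(integer) 0
127.0.0.1:6379> setbit sign 1 0
(integer) 0
127.0.0.1:6379> setbit sign 2 0
(integer) 0
127.0.0.1:6379> setbit sign 3 1
(integer) 0
127.0.0.1:6379> setbit sign 4 1
(integer) 0
127.0.0.1:6379> setbit sign 5 0
(integer) 0
127.0.0.1:6379> setbit sign 6 0
(integer) 0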
Check to see if you clocked in on a particular day!
127.0.0.1:6379> getbit sign 3
(integer) 1
127.0.0.1:6379> getbit sign 6
(integer) 0
Count operation: count the number of check-in days!
127.0.0.1:6379> bitcount sign    # count this week's check-in records; you can see whether there was full attendance!
(integer) 3
5. Transactions
The essence of a Redis transaction: a batch of commands! All commands in a transaction are serialized, and they are executed in order while the transaction runs!
Once, ordered, exclusive! A set of commands is executed!
------ queue: set set set execute ------
Redis transactions have no concept of isolation levels!
The commands in a transaction are not executed immediately when queued! They are only executed when the transaction is launched with exec. A single Redis command is atomic, but a Redis transaction does not guarantee atomicity!
Redis transactions:
- Start transaction (multi)
- Queue commands (......)
- Execute transaction (exec)
Normal transaction execution!
127.0.0.1:6379> multi        # start the transaction
OK
# commands are queued
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> exec         # execute the transaction
1) OK
2) OK
3) "v2"
4) OK
Abandoning a transaction!
127.0.0.1:6379> multi        # start the transaction
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> DISCARD      # cancel the transaction
OK
127.0.0.1:6379> get k4       # none of the commands in the transaction queue are executed!
(nil)
Compile-time exception (the code has a problem, a wrong command!): none of the commands in the transaction will be executed!
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> getset k3              # a wrong command
(error) ERR wrong number of arguments for 'getset' command
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> set k5 v5
QUEUED
127.0.0.1:6379> exec                   # the transaction fails to execute!
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> get k5                 # none of the commands were executed!
(nil)
Runtime exception (like 1/0): if a queued command fails at runtime, the other commands still execute normally; only the wrong command throws an exception!
127.0.0.1:6379> set k1 "v1"
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> incr k1                # this will fail when executed!
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> get k3
QUEUED
127.0.0.1:6379> exec
1) (error) ERR value is not an integer or out of range   # the first command failed, but the others still executed successfully
2) OK
3) OK
4) "v3"
127.0.0.1:6379> get k2
"v2"
127.0.0.1:6379> get k3
"v3"
Monitoring! watch (often asked about in interviews!)
Pessimistic locking:
- Very pessimistic: assume something can go wrong at any time, so lock no matter what!
Optimistic locking:
- Very optimistic: assume nothing will go wrong, so never lock! When updating the data, check whether anyone else modified it in the meantime
- Get the version
- Compare the version when updating
Redis watch test
Normal execution succeeds!
127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money            # watch the money object
OK
127.0.0.1:6379> multi                  # the transaction completes normally because the data was not modified in the meantime
OK
127.0.0.1:6379> DECRBY money 20
QUEUED
127.0.0.1:6379> INCRBY out 20
QUEUED
127.0.0.1:6379> exec
1) (integer) 80
2) (integer) 20
Test a multi-threaded modification of the value: using watch works as Redis's optimistic lock!
127.0.0.1:6379> watch money            # watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> DECRBY money 10
QUEUED
127.0.0.1:6379> INCRBY out 10
QUEUED
127.0.0.1:6379> exec                   # another thread modified our value before exec, so the transaction fails!
(nil)
If the transaction fails, just watch again to get the latest value and retry (a sketch follows)
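A hedged sketch of the retry described above; after a failed exec the watch has already been cleared, so we watch again and redo the transaction (the returned values are example output):
127.0.0.1:6379> unwatch                # optional: explicitly clear any watch after the failure
OK
127.0.0.1:6379> watch money            # watch again to get the latest value
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> DECRBY money 10
QUEUED
127.0.0.1:6379> INCRBY out 10
QUEUED
127.0.0.1:6379> exec                   # succeeds this time if nobody modified money in between
1) (integer) 70
2) (integer) 30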
6. Redis.conf explained
When starting Redis, we start it from the configuration file!
At work, a few small configuration tweaks can make you stand out!
An expert shows their skill the moment they act!
Units
The configuration file's units are case-insensitive (1GB, 1Gb, and 1gB are the same)!
Includes
Like Spring's import / include, other configuration files can be included
NETWORK: network settings
GENERAL: general settings
SNAPSHOTTING: snapshots
Persistence: if a given number of operations happen within a given time window, the data is persisted to a file (.rdb / .aof)
Redis is an in-memory database; without persistence, data is lost when the power goes off!
REPLICATION: replication, which we will cover later together with master-slave replication
SECURITY: security
You can set the Redis password here; by default there is no password!
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> config get requirepass        # get the redis password
1) "requirepass"
2) ""
127.0.0.1:6379> config set requirepass "123456"   # set the redis password
OK
127.0.0.1:6379> config get requirepass        # all commands now require authentication
(error) NOAUTH Authentication required.
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 123456                   # log in with the password!
OK
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) "123456"
CLIENTS: limits
maxclients 10000                  # the maximum number of clients that can connect to Redis
maxmemory <bytes>                 # the maximum memory capacity Redis may use
maxmemory-policy noeviction       # the eviction strategy once the memory limit is reached
# 1. volatile-lru: apply LRU only to keys that have an expiration set
# 2. allkeys-lru: evict any key using the LRU algorithm
# 3. volatile-random: randomly evict keys that have an expiration set
# 4. volatile-ttl: evict the keys that are closest to expiring
# 5. allkeys-random: randomly evict any key
# 6. noeviction: never evict, return an error on writes (the default)
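A hedged sketch of adjusting these limits at runtime with config set (the values are examples; edit redis.conf for a permanent change):
127.0.0.1:6379> config set maxmemory 100mb
OK
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"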
APPEND ONLY MODE: AOF configuration
We will explain the specific settings in detail in the Redis persistence section!
7. Redis publish/subscribe
Redis publish/subscribe (pub/sub) is a messaging model: the publisher (pub) sends messages and the subscribers (sub) receive them. Think of WeChat, Weibo, or follow systems!
A Redis client can subscribe to any number of channels.
Subscribe/publish message diagram:
First role: the message publisher; second: the channel; third: the message subscriber!
The following figure shows channel channel1 and the relationship between the three clients subscribed to it: client2, client5, and client1:
When a new message is sent to channel Channel1 via PUBLISH, the message is sent to the three clients that subscribed to it:
7.1. Commands
These commands are widely used to build instant-messaging applications, such as chat rooms, live broadcast rooms, and real-time alerts.
7.2. Test
Subscriber side:
127.0.0.1:6379> SUBSCRIBE kuangshenshuo     # subscribe to a channel: kuangshenshuo
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "kuangshenshuo"
3) (integer) 1
# waiting to receive pushed messages
1) "message"            # a message
2) "kuangshenshuo"      # which channel it came from
3) "hello,kuangshen"    # the actual content of the message
1) "message"
2) "kuangshenshuo"
3) "hello,redis"
Publisher side:
127.0.0.1:6379> PUBLISH kuangshenshuo "hello,kuangshen"   # the publisher sends a message to the channel!
(integer) 1
127.0.0.1:6379> PUBLISH kuangshenshuo "hello,redis"       # the publisher sends a message to the channel!
(integer) 1
127.0.0.1:6379>
7.3. How it works
Redis is implemented in C. By reading the pubsub.c file in the Redis source code, we can understand the underlying implementation of the publish/subscribe mechanism and deepen our understanding of Redis.
Redis implements publish and subscribe through commands such as PUBLISH, SUBSCRIBE, and PSUBSCRIBE.
After a channel is subscribed to with the SUBSCRIBE command, redis-server maintains a dictionary whose keys are the channels and whose values are linked lists of all the clients subscribed to each channel. The key job of the SUBSCRIBE command is to add the client to the subscription list of the given channel.
When a message is sent to subscribers with the PUBLISH command, redis-server uses the given channel as the key, looks up the list of clients subscribed to that channel in the channel dictionary it maintains, walks through the list, and delivers the message to every subscriber.
Pub/Sub literally means publish and subscribe. In Redis you can publish messages on a channel and subscribe to it; when a message is published on a channel, all clients subscribed to it receive the message. The most obvious use of this feature is as a real-time messaging system, for example ordinary instant messaging, group chat, and so on.
Usage scenarios: 1. real-time messaging systems! 2. real-time chat! (the channel is the chat room, and messages are shown to everyone!) 3. follow/subscription systems also work!
For slightly more complex scenarios we would use message-queue middleware (MQ)
8. Redis master-slave replication
8.1. Concept
Master-slave replication means copying data from one Redis server to other Redis servers. The former is called the master node (master/leader) and the latter are called slave nodes (slave/follower). Data replication is one-way: it can only go from the master to the slaves. The master mainly handles writes, while the slaves mainly handle reads.
By default, every Redis server is a master node; a master can have multiple slaves (or none), but a slave can only have one master.
The primary and secondary replication functions include:
1. Data redundancy: Master/slave replication implements hot backup of data and is a data redundancy mode other than persistence.
2. Fault recovery: When a fault occurs on the primary node, the secondary node provides services to achieve rapid fault recovery. It’s actually redundancy of services.
3. Load balancing: on the basis of master/slave replication, with read/write separation, the master node can provide write service, and the slave node can provide read service (that is, when Redis data is written, the application connects to the master node and when Redis data is read, the application connects to the slave node) to share the load of the server. Especially in the scenario of less write and more read, the concurrency of the Redis server can be greatly increased by sharing the read load with multiple slave nodes.
High availability (clustering) cornerstone: In addition to the above functions, master-slave replication is the foundation upon which sentry and clustering can be implemented, hence master-slave replication is the foundation of High availability in Redis.
Generally speaking, to apply Redis to engineering projects, only one Redis is not acceptable (down), for the following reasons:
1. Structurally, a single Redis server will have a single point of failure, and one server needs to handle all the request loads, resulting in great pressure;
2. In terms of capacity, a single Redis server's memory is limited. Even if one server had 256G of memory, you could not use all of it for Redis storage; generally, a single Redis instance should use no more than 20G of memory.
Products on e-commerce websites are generally uploaded once and browsed countless times, i.e. "read more, write less".
For this scenario, we can have the following architecture:
Master/slave replication, read/write separation! 80% of the time you’re reading! Relieve server stress! Often used in architecture! One master and two subordinate!
Master-slave replication is a must as long as you are in a company, because you cannot use Redis alone in a real project!
8.2. Environment Configuration
Configure only the slave library, not the master library!
127.0.0.1:6379> info replication    # view the replication info of the current instance
# Replication
role:master            # role: master
connected_slaves:0     # no slaves connected
master_replid:b63c90e6c501143759cb0e7f450bd1eb0c70882a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
Make three copies of the configuration file and modify the corresponding settings: 1. port 2. pid file name 3. log file name 4. dump.rdb file name (a sketch follows)
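A hedged sketch of what one of the modified configuration files (e.g. redis80.conf, for the instance on port 6380) might contain; the file name and paths are illustrative:
# redis80.conf
port 6380
daemonize yes
pidfile /var/run/redis_6380.pid
logfile "6380.log"
dbfilename dump6380.rdb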
After modifying them, start our three Redis services; you can verify them through the process list!
8.3. One master, two slaves
By default, every Redis server is a master node; we normally only need to configure the slaves!
Recognize the boss! One master (6379), two slaves (6380, 6381)
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379    # SLAVEOF host port: find the boss!
OK
127.0.0.1:6380> info replication
# Replication
role:slave                 # the current role is slave
master_host:127.0.0.1      # you can see the master's information
master_port:6379
master_link_status:up
master_last_io_seconds_ago:3
master_sync_in_progress:0
slave_repl_offset:14
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:a81be8dd257636b2d3e7a9f595e69d73ff03774e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:14
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:14

# viewed from the master!
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1         # one slave is now connected
slave0:ip=127.0.0.1,port=6380,state=online,offset=42,lag=1   # the slave's information
master_replid:a81be8dd257636b2d3e7a9f595e69d73ff03774e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:42
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:42
If both slaves are configured this way, the master has two slaves
The real master-slave configuration should be done in the configuration file, which makes it permanent; here we used commands, which is temporary! (a config sketch follows)
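A hedged sketch of the permanent configuration in a slave's redis.conf (the directive is spelled replicaof in newer versions and slaveof in older ones; masterauth is only needed if the master has a password):
# in the slave's redis.conf
replicaof 127.0.0.1 6379
# slaveof 127.0.0.1 6379      # equivalent, older spelling
masterauth 123456             # only if the master has requirepass set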
8.4. Details
The master can write; the slaves can only read, not write! All information and data on the master is automatically saved by the slaves!
Master writes:
Test: when the master disconnects, the slaves are still connected to it but there are no write operations. If the master comes back, the slaves can still directly get the data it writes!
If the slaves were configured on the command line and they restart, they revert to being masters! As soon as one becomes a slave again, it immediately fetches the values from the master!
8.5 Principle of Replication
After connecting to the master successfully, the slave sends a sync command.
The master receives the command, starts the background save process, and collects all commands received that modify the data set. When the background save is complete, the master sends the entire data file to the slave, completing one full synchronization.
**Full resynchronization:** after receiving the database file, the slave saves it to disk and loads it into memory.
**Incremental replication:** the master then continues to send every newly collected modification command to the slave in order, keeping it synchronized.
But as soon as a slave reconnects to the master, a full synchronization (full resynchronization) is performed automatically! So our data can always be seen on the slave!
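A quick way to see this behaviour, sketched with redis-py under the same assumed ports (6379 master, 6380 slave):

```python
import time
import redis

master = redis.Redis(host="127.0.0.1", port=6379)
slave = redis.Redis(host="127.0.0.1", port=6380)

# Write on the master only
master.set("k1", "v1")

time.sleep(0.1)           # give replication a moment to propagate
print(slave.get("k1"))    # b'v1' -- the write is visible on the slave

# Writing to the slave fails because slaves are read-only by default (slave-read-only yes)
try:
    slave.set("k2", "v2")
except redis.exceptions.ReadOnlyError as e:
    print("slave rejected the write:", e)
```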
Chained replication (layer upon layer)
The previous M links to the next S! (A slave can itself act as the master of another slave.)
This setup can also accomplish our master-slave replication!
If there is no boss, can we elect a new boss at this point? Manually!
Usurping the throne
If the master disconnects, we can use ==SLAVEOF no one== to make ourselves the master! Other nodes can then be manually connected to this newest master node! If the old boss comes back at this point, it has to be reconnected manually!
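Sketched with redis-py (ports assumed as before): promoting one slave and manually re-pointing the remaining node.

```python
import redis

# Suppose the master on 6379 has gone down; promote the slave on 6380
new_master = redis.Redis(host="127.0.0.1", port=6380)
new_master.slaveof()      # no arguments == "SLAVEOF NO ONE": become a master

# Manually point the other slave at the newly promoted master
other = redis.Redis(host="127.0.0.1", port=6381)
other.slaveof("127.0.0.1", 6380)

print(new_master.info("replication")["role"])   # expected: 'master'
```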
9. Sentinel mode
Refer to my last article on sentinel mode.
9.1 Complete configuration of Sentinel mode!
# Example sentinel.conf
# Sentinel instance runs on port 26379 by default
port 26379
# Sentinel's working directory
dir /tmp
# The IP and port of the Redis master node monitored by Sentinel
# master-name: the name given to the master node; it may only contain letters a-z, digits 0-9, and the characters ".-_"
# quorum: how many sentinels must agree that the master node is unreachable before it is considered down
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
sentinel monitor mymaster 127.0.0.1 6379 2

# When requirepass foobared is enabled on the Redis instance, every client connecting to it must provide the password
# Note that the master and the slaves must be configured with the same authentication password
# sentinel auth-pass <master-name> <password>
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
# Number of milliseconds after which, if the master has not answered the sentinel, the sentinel subjectively considers the master offline (default: 30 seconds)
# sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 30000

# This option specifies how many slaves may synchronize with the new master at the same time during a failover.
# The smaller the number, the longer the failover takes; the larger the number, the more slaves are temporarily unavailable because of replication.
# Setting this value to 1 ensures that only one slave at a time is unable to serve command requests.
# sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1
# failover-timeout is used in the following ways:
# 1. The interval between two failover operations for the same master by the same sentinel.
# 2. The time counted from the moment a slave starts synchronizing from a wrong master until it is corrected to synchronize from the right master.
# 3. The time required to cancel an ongoing failover.
# 4. The maximum time allowed, during a failover, for all slaves to be reconfigured to point to the new master. Even after this timeout the slaves are still eventually pointed at the master correctly, just no longer according to the parallel-syncs rule.
# Default: 3 minutes
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 180000
# SCRIPTS EXECUTION
# Configure the scripts to execute when an event occurs. For example, a script can notify the administrator by email when the system is not working properly.
# The following rules apply to the result of a script:
# If the script returns 1, it will be executed again later (currently up to 10 times by default).
# If the script returns 2, or any value higher than 2, it will not be executed again.
# If the script is terminated during execution by a system signal, it behaves as if it had returned 1.
# The maximum execution time of a script is 60 seconds. If this is exceeded, the script is terminated by a SIGKILL signal and executed again.
# Notification script: this script is called whenever sentinel generates a warning-level event (such as a subjective or objective failure of a Redis instance). It should notify the system administrator, for example by email or SMS, that the system is not operating normally. When called, the script is passed two parameters: the type of the event and its description. If a script path is configured in sentinel.conf, the script must exist at that path and be executable, otherwise sentinel will not start.
# Notification script
# shell programming
# sentinel notification-script <master-name> <script-path>
sentinel notification-script mymaster /var/redis/notify.sh
# Client reconfiguration script
# This script is called when the master changes because of a failover, to notify the client that the master address has changed.
# The following arguments are passed to the script when it is called:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# <state> is always "failover"
# <role> is either "leader" or "observer"
# from-ip, from-port, to-ip, to-port are used to communicate with the old master and the new master (i.e. the old slave)
# This script should be generic and callable multiple times; it is not target-specific.
# sentinel client-reconfig-script <master-name> <script-path>
sentinel client-reconfig-script mymaster /var/redis/reconfig.sh   # usually configured by operations!
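On the client side, applications usually discover the current master through the sentinels rather than through a fixed address. A minimal sketch with redis-py, assuming the sentinel above runs on 127.0.0.1:26379 and monitors `mymaster`:

```python
from redis.sentinel import Sentinel

# Addresses of the sentinels (a single local sentinel is assumed here; use several in production)
sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)

# These connections always follow the current master / an available slave,
# even after a failover promotes a new master
master = sentinel.master_for("mymaster", socket_timeout=0.5)
slave = sentinel.slave_for("mymaster", socket_timeout=0.5)

master.set("greeting", "hello")
print(slave.get("greeting"))
```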
13. Redis Cache Penetration and Avalanche
High availability of the service!
Here we won't go into the low-level details of the solutions!
Using Redis as a cache greatly improves the performance and efficiency of an application, especially for data queries. But at the same time it also brings some problems. The most crucial one is data consistency; strictly speaking, this problem has no solution. If strong consistency is required, you cannot use caching. Other typical problems are cache penetration, cache avalanche, and cache breakdown. The industry also has fairly popular solutions for these.
13.1 Cache Penetration (not found)
Concept
The concept of cache penetration is simple. A user queries for a piece of data that the Redis in-memory database does not have, i.e. the cache misses, so the query goes to the persistence-layer database. It is not found there either, so the query fails. When there are many users (a flash sale!) and the cache keeps missing, all of them hit the persistence-layer database. This puts enormous pressure on it; the requests have, in effect, penetrated the cache.
The solution
Bloom filter
A Bloom filter is a data structure that stores all possibly-queried keys in hashed form. Requests are checked against it at the control layer first and discarded when the key cannot exist, which avoids query pressure on the underlying storage system.
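A minimal, illustrative sketch of the idea: a toy in-process Bloom filter (real systems typically use RedisBloom or a dedicated library; the product ids below are made up):

```python
import hashlib

class SimpleBloomFilter:
    """A toy Bloom filter: num_hashes positions over a bit array of size_bits."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, key: str):
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key: str) -> None:
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))


# Preload every legal product id once (made-up ids), then reject impossible keys
# at the control layer before they ever reach Redis or the database
bloom = SimpleBloomFilter()
for product_id in ["p1001", "p1002", "p1003"]:
    bloom.add(product_id)

def query_product(product_id: str):
    if not bloom.might_contain(product_id):
        return None  # definitely not in the system: drop the request here
    # ... otherwise fall through to the cache, then the database
```

A Bloom filter can report false positives but never false negatives, which is exactly what is needed here: a "no" answer is safe to reject immediately.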
Caching empty objects
When the storage layer also misses, even the returned empty object is cached, with an expiration time set. Subsequent accesses to this data are then served from the cache, protecting the back-end data source.
But there are two problems with this approach:
1. If null values can be cached, this means that the cache needs more space to store more keys, because there may be a lot of null keys;
2. Even if the expiration time is set for null values, data at the cache layer and storage layer will be inconsistent for a period of time, which will affect services that need to maintain consistency.
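A minimal sketch of the empty-object caching described above, using redis-py; `query_db` stands in for whatever persistence-layer lookup the application actually uses (a made-up name):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379)
NULL_PLACEHOLDER = b"__NULL__"   # sentinel value meaning "the database has no such row"

def query_db(key: str):
    """Hypothetical persistence-layer lookup; returns None when the row does not exist."""
    return None

def get_with_null_cache(key: str, null_ttl: int = 60, ttl: int = 600):
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_PLACEHOLDER else cached

    value = query_db(key)
    if value is None:
        # Cache the miss itself with a short TTL so repeated lookups stop hitting the DB
        r.set(key, NULL_PLACEHOLDER, ex=null_ttl)
        return None

    r.set(key, value, ex=ttl)
    return value
```

The short TTL on the placeholder is what bounds the two drawbacks above: null keys are evicted quickly, and the window of inconsistency is limited to `null_ttl`.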
13.2 Cache Breakdown (huge traffic, a hot key expires!)
Overview
Note the difference between cache breakdown and cache penetration. In a breakdown, a single key is extremely hot and is constantly hit with huge concurrency, all concentrated on this one point. At the instant the key expires, the sustained high concurrency breaks through the cache and goes straight to the database, like drilling a hole through a barrier.
When such a key expires, the large number of concurrent requests (this is usually hot data) all go to the database to query the latest value and write it back to the cache.
The solution
Set the hotspot data to never expire
From the cache level, the expiration time is not set, so there are no problems caused by hot key expiration.
Add a mutex
Distributed lock: use a distributed lock to guarantee that, for each key, only one thread at a time queries the back-end service. The other threads cannot obtain the lock, so they simply wait. This shifts the pressure of high concurrency onto the distributed lock, which therefore faces a great challenge.
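A sketch of the mutex idea with redis-py, using `SET NX EX` as a simple per-key lock (a simplified pattern rather than a full distributed-lock implementation such as Redlock; `query_db` is again a stand-in):

```python
import time
import uuid
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

def query_db(key: str):
    """Hypothetical slow persistence-layer lookup."""
    return b"value-from-db"

def get_with_mutex(key: str, ttl: int = 600):
    value = r.get(key)
    if value is not None:
        return value

    lock_key = f"lock:{key}"
    token = str(uuid.uuid4())
    # Only one caller acquires the lock (SET lock:key token NX EX 10)
    if r.set(lock_key, token, nx=True, ex=10):
        try:
            value = query_db(key)          # rebuild the cache exactly once
            r.set(key, value, ex=ttl)
        finally:
            if r.get(lock_key) == token.encode():
                r.delete(lock_key)         # release only our own lock
        return value

    # Everyone else waits briefly, then retries the cache
    time.sleep(0.05)
    return get_with_mutex(key, ttl)
```

In production the release step would normally be a single atomic Lua script, so the check and the delete cannot be interleaved with another client's lock acquisition.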
13.3 Cache Avalanche
Concept
A cache avalanche is when a large set of cached keys expires within the same period of time, or Redis itself goes down!
One cause of an avalanche: for example, as this is written it is almost midnight of Double Twelve (a shopping festival), and a wave of panic buying is about to start. This batch of products was put into the cache within a fairly concentrated window, say with a one-hour expiration. By one o'clock in the morning the cache for these products has expired, and all accesses to this batch of products fall on the database, producing a periodic pressure spike. Every request reaches the storage layer, whose call volume surges, and the storage layer may fail as well.
In fact, this kind of concentrated expiration is not the deadliest case. A deadlier cache avalanche is when a node of the cache server goes down or loses its network connection. With the natural, expiry-driven avalanche, the cache entries were created at a time when the database could still withstand the pressure, so it only causes periodic strain on the database. When a cache node crashes, however, the pressure put on the database server is unpredictable and may crush the database in an instant.
The solution
Redis high availability
The idea is that since Redis may fail, we add more Redis instances, so that when one fails the others can carry on. This is in fact building a cluster. (Multi-site active-active!)
Rate limiting and degradation
The idea of this solution is to control, by locking or queueing, the number of threads that read the database and write the cache after a cache entry has expired. For example, for a given key only one thread is allowed to query the data and write the cache, while the other threads wait.
Data preheating
The idea of data preheating is to access the likely-hot data in advance, before deployment, so that a large part of the potentially heavily accessed data is already loaded into the cache. Before the large concurrent access arrives, manually trigger the loading of the different cache keys and set different expiration times, so that cache expiry is spread as evenly as possible.
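A minimal sketch of a preheating script with redis-py, jittering the TTLs so the keys do not all expire at the same moment (`load_hot_products` is a made-up loader for whatever data the application considers hot):

```python
import random
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

def load_hot_products():
    """Hypothetical loader returning (key, value) pairs of likely-hot data."""
    return [("product:1001", "..."), ("product:1002", "...")]

def preheat(base_ttl: int = 3600, jitter: int = 600):
    # Warm the cache before traffic arrives; spread the TTLs so expiry is staggered
    for key, value in load_hot_products():
        r.set(key, value, ex=base_ttl + random.randint(0, jitter))

if __name__ == "__main__":
    preheat()
```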