01. Introduction to NoSQL – part 1
1. Great opportunities in the Internet era, why NoSQL
The good old days of standalone MySQL
In the ’90s, website traffic was small enough that a single database could easily handle it. At that time, static web pages were far more common than dynamic, interactive sites.
DAL is an acronym for Data Access Layer.
Under the above architecture, what are the bottlenecks of data storage?
- The total amount of data no longer fits on one machine
- The data indexes (B+ trees) no longer fit in one machine's memory
- The traffic (mixed reads and writes) exceeds what one instance can withstand
If any one of these three conditions is met, the architecture has to evolve.
2. Memcached + MySQL + vertical split
Later, as traffic increased, almost all sites built on MySQL began to hit database performance problems; web applications were no longer focused only on features but also on performance. Programmers started using caching to relieve database pressure and to optimize database structures and indexes. At first, file-based caching was popular, but as traffic kept growing, multiple web servers could not share a file cache, and the large number of small cache files created heavy I/O pressure. At that point, Memcached naturally became a very fashionable product.
3. MySQL master-slave replication and read/write separation
Since Memcached only relieves read pressure, as write pressure grew most websites started using master-slave replication to separate reads from writes and to improve the performance and scalability of the read path. MySQL's master-slave mode became standard for websites at this time.
4. Table and database sharding + horizontal split + MySQL cluster
On top of Memcached caching and MySQL master-slave replication with read/write separation, the write pressure on the MySQL master began to bottleneck while the data volume kept surging. Because MyISAM uses table-level locks, serious lock contention appears under high concurrency, so a large number of high-concurrency MySQL applications started switching from MyISAM to the InnoDB engine.
At the same time, sharding tables and databases to ease write pressure and data-growth problems became popular. Sharding became a hot technology, a hot interview topic, and a hot subject of discussion in the industry. It was around this time that MySQL shipped table partitioning, which was still not stable but gave hope to companies with modest engineering resources. MySQL also launched MySQL Cluster; its performance could not meet Internet-scale requirements well, but it did provide a strong guarantee of high reliability.
5. MySQL scalability bottlenecks
MySQL also often stores large text fields, which makes database tables very large and database recovery very slow; it is not easy to restore such a database quickly. For example, 10 million rows of 4KB text add up to nearly 40GB. If this data could be kept out of MySQL, the database would become very small. Relational databases are powerful, but they cannot handle every application scenario well. MySQL's poor scalability (complex techniques are required to scale it), the I/O pressure of big data, and the difficulty of changing table structures are the problems MySQL developers face today.
6. What’s it like today?
7. Why NoSQL
Today, we can easily access and collect data through third-party platforms (such as Google and Facebook). User profiles, social networks, geolocation data, user-generated content, and user logs have multiplied. SQL databases are not well suited to mining this user data, whereas NoSQL databases can handle such big data very well.
2. What is it
NoSQL (NoSQL = Not Only SQL), meaning "more than SQL", refers to non-relational databases. With the rise of Web 2.0 websites, traditional relational databases could no longer cope, especially with super-large-scale, highly concurrent SNS-type purely dynamic Web 2.0 sites, exposing many problems that are difficult to overcome, while non-relational databases developed very rapidly thanks to their own characteristics. NoSQL databases were created to solve the challenges of large data sets and multiple data types, especially big data applications, including the storage of super-large-scale data.
(Google and Facebook, for example, collect trillions of bits of data about their users every day.) These data stores do not require fixed schemas and can scale horizontally without unnecessary operational burden.
3. What can it do
Easy to scale
There are many kinds of NoSQL databases, but one common feature is that they remove the relational features of relational databases. With no relationships between data, scaling becomes easy, and it also brings scalability at the architecture level.
High performance with large data volume
NoSQL databases have very high read/write performance, especially under large data volumes.
This benefits from the absence of relations and the simplicity of the database structure.
MySQL typically uses the Query Cache, which is invalidated every time the table is updated. This coarse-grained cache performs poorly in Web 2.0 applications with frequent interaction.
NoSQL caches are record-level, fine-grained caches, so NoSQL performs much better at this level.
Diverse and flexible data models
NoSQL does not need to create fields for the data to be stored in advance and can store custom data formats at any time.
In relational databases, adding and deleting fields is very troublesome. If you have a very large table, adding a field is a nightmare.
Traditional RDBMS vs. NoSQL
RDBMS
- Highly organized structured data
- Structured Query Language (SQL)
- Data and relationships are stored in separate tables
- Data manipulation language, data definition language
- Strict consistency
- Based on transactions
NoSQL
- Stands for more than just SQL
- No declarative query language
- No predefined schemas
- Key-value stores, column stores, document stores, graph databases
- Eventual consistency rather than ACID properties
- Unstructured and unpredictable data
- The CAP theorem
- High performance, high availability, and scalability
4. Common NoSQL databases
- Redis
- Memcached
- MongoDB
5. How to use it
- KV
- Cache
- Persistence
02. Introduction to NoSQL – part 2
3V + 3 high
The 3 Vs of the big data era:
- Volume: massive data
- Variety: diverse data types
- Velocity: real-time
The 3 highs of Internet demand:
- High concurrency
- High scalability
- High performance
03. Introduction to NoSQL application scenarios
Combining SQL and NoSQL
How Alibaba's China site stores product information
Take the women's bags category on the home page of Alibaba's China site as an example.
Architecture evolution history:
- The evolution process
- The fifth generation
- The mission of the fifth-generation architecture
The multi-data-source storage problems that concern us
Storage scheme of commodity information
- Basic Commodity Information
- Name, price, date of manufacture, manufacturer, etc
- MySQL/Oracle relational databases; Taobao is currently "de-O-ing" (removing Oracle). Note that the MySQL used inside Taobao is a version heavily modified by its own experts
- Why de-IOE (in IT infrastructure, removing IBM minicomputers, Oracle databases, and EMC storage devices)? In short: so you don't have to dance in shackles.
- Product description, details, evaluation information (multi-text)
- Large text degrades I/O read/write performance
- Document database: MongoDB
- Pictures of goods
- Commodity picture display category
- Distributed file system
- Taobao own TFS
- Google’s GFS
- Hadoop's HDFS
- Product keywords
  - Taobao's own ISearch
- High-frequency information about a product (e.g., chocolates for Valentine’s Day)
- In-memory database
- Tair, Redis, Memcached
- Commodity trading, price calculation, accumulative points
- External system, external third party payment interface
- Alipay
Summary: the difficulties and solutions of large-scale Internet applications (big data, high concurrency, diverse data types)
Difficulties
- Diversity of data types
- Diversity of data sources, and refactoring as they change
- Transforming a data source must not require large-scale reconstruction of the data service platform
Solutions
- EAI (Enterprise Application Integration)
- UDSL: a unified data platform service layer
  - What it is
  - What it looks like
    - Mapping
    - API
    - Hot cache
04. Introduction to NoSQL data model
Comparing relational and non-relational database design with an e-commerce model of customers, orders, order items, and addresses
- How a traditional relational database designs it
  - ER diagrams (1:1, 1:N, N:1), primary and foreign keys, etc.
- How NoSQL designs it
  - BSON: a binary form of the JSON storage format, short for Binary JSON; like JSON, it supports embedded document objects and array objects (see the document sketch at the end of this section)
- The contrast: problems and difficulties
  - Why the aggregation model?
    - Highly concurrent operations should avoid join queries; Internet companies use redundant data to sidestep joins
    - Distributed transactions cannot support high concurrency
The aggregation model
- KV
- BSON
- Column family
- As the name implies, data is stored by column. Its biggest strengths are easy storage of structured and semi-structured data, easy data compression, and a very large I/O advantage for queries against one or a few columns.
- Graph
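As a rough illustration of the aggregation model, a single customer document with embedded orders might look like the following in a document database (a sketch; all field names and values are invented):
```json
{
  "customer_id": "c1001",
  "name": "Alice",
  "address": { "city": "Hangzhou", "street": "100 Example Road" },
  "orders": [
    {
      "order_id": "o2001",
      "items": [
        { "product": "handbag", "price": 299.00, "quantity": 1 }
      ],
      "total": 299.00
    }
  ]
}
```
One read of this single aggregate fetches what a relational design would assemble from joins across customer, order, order-item, and address tables.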
05. Four categories of NoSQL database
- KV
- Sina: BerkeleyDB + Redis
- Meituan: Redis + tair
- Alibaba, Baidu: memcache + Redis
- Document databases (usually BSON format)
- CouchDB
- MongoDB
- MongoDB is a database based on distributed file storage, written in C++. It is designed to provide scalable, high-performance data storage solutions for web applications.
- MongoDB sits between relational and non-relational databases; among non-relational databases it is the most feature-rich and the most like a relational database.
- Column storage database
- Cassandra, HBase
- Distributed file system
- Graph relational database
- It is not about graphics but about relationships, such as friend circles, social networks, and ad recommendation systems
- Social networking and recommendation systems, focused on building relationship graphs
- Neo4J, InfoGrid
- Comparison of the four categories
06. Distributed database CAP principle
What is conventional ACID
- A (Atomicity)
- C (Consistency)
- I (Isolation)
- D (Durability)
Relational databases follow the ACID rules. A transaction is similar to a deal in the real world; it has four properties:
1. A (Atomicity): atomicity is easy to understand; all operations in a transaction either complete together or not at all, and a transaction succeeds only if every operation in it succeeds. For example, a bank transfer of 100 yuan from account A to account B has two steps: 1) withdraw 100 yuan from account A; 2) deposit 100 yuan into account B. Either both steps complete or neither does; if only the first step completed and the second failed, 100 yuan would inexplicably go missing.
2. C (Consistency): also easy to understand; the database must always stay consistent, and running a transaction does not violate the database's original consistency constraints.
3. I (Isolation): concurrent transactions do not affect each other. If the data one transaction wants to access is being modified by another transaction, then as long as that other transaction is uncommitted, the data it reads is unaffected by the uncommitted changes. For example, while a transaction transferring 100 yuan from account A to account B is still in progress, if B queries his own account at that moment, he cannot yet see the newly added 100 yuan.
4. D (Durability): once a transaction is committed, its modifications are kept in the database permanently and are not lost even if the system goes down.
CAP
- C: Consistency
- A: Availability
- P: Partition tolerance
CAP theory says that a distributed storage system can implement at most two of the three.
Since current network hardware inevitably suffers from delays and packet loss, partition tolerance is something we must implement. So we have to trade off between consistency and availability; no NoSQL system can guarantee all three.
- CA – traditional Oracle databases
- AP – the choice of most website architectures
- CP – Redis, MongoDB
Note: Trade-offs must be made when working with distributed architectures.
Strike a balance between consistency and availability. Most web applications do not really need strong consistency, so sacrificing C in exchange for P is the current direction of distributed database products.
Consistency versus availability
Many key features of relational databases are often useless for Web 2.0 sites
Database transaction consistency requirements
Many real-time web systems do not require strict database transactions, have low requirements for read consistency, and in some cases do not demand strong write consistency either; eventual consistency is acceptable.
Real-time requirements for database reads and writes
In a relational database, a query issued immediately after an insert is guaranteed to read the data back; but many web applications do not need such strong real-time behavior. For example, after a message is posted on Weibo, it is perfectly acceptable that followers only see it a few seconds, or even more than ten seconds, later.
The need for complex SQL queries, especially multi-table joins
Any web system with a large amount of data is very wary of joins across multiple large tables, as well as complex report queries for data analysis, especially on SNS-type websites, where such situations are avoided from the requirements and product-design angle. What remains is mostly primary-key queries on single tables and simple conditional paging queries on single tables; the power of SQL is greatly weakened.
The classic CAP diagram
The core of CAP theory: a distributed system cannot simultaneously satisfy consistency, availability, and partition tolerance; it can satisfy at most two.
Therefore, according to the CAP principle, NoSQL database can be divided into three categories that meet the CA principle, CP principle and AP principle:
- CA – single-site clusters; systems that satisfy consistency and availability but are usually weak in scalability.
- CP – systems that satisfy consistency and partition tolerance; performance is usually not particularly high.
- AP – systems that satisfy availability and partition tolerance; they generally accept weaker consistency.
BASE
BASE is a solution proposed to address the loss of availability caused by relational databases' strong consistency.
BASE is an abbreviation of three terms:
- Basically Available
- Soft State
- Eventually consistent
Its idea: improve the system's overall scalability and performance by relaxing the consistency required at any single point in time. Why? Large systems, due to geographic distribution and extremely high performance requirements, cannot use distributed transactions to meet these goals; another approach is needed, and BASE is that solution.
Overview of Distributed + Cluster
Distributed System
A distributed system consists of multiple computers and communicating software components connected over a computer network (local or wide area). It is a software system built on top of a network; thanks to the nature of software, it offers high cohesion and transparency. The difference between a network and a distributed system therefore lies more in the high-level software (especially the operating system) than in the hardware. Distributed systems can run on various platforms, such as PCs, workstations, LANs, and WANs.
To put it simply:
- Distributed: different service modules (projects) are deployed on different servers; they communicate and invoke each other through RPC/RMI to provide services externally and collaborate internally.
- Cluster: the same service module is deployed on multiple servers, and distributed scheduling software schedules them centrally to provide services externally.
07. Installing Redis
Introductory overview
What is it
Redis (REmote DIctionary Server) is a high-performance (key/value) distributed in-memory database. It runs in memory, supports persistence, is written in C, and complies with the BSD license. It is currently one of the most popular NoSQL databases, and is also known as a data structure server.
Redis and other key-value caching products have the following three characteristics:
- Redis supports data persistence: it can keep in-memory data on disk and reload it for use upon restart
- Redis not only supports simple key-value type data, but also provides the storage of list, set, zset, hash and other data structures
- Redis supports data backup, namely, data backup in master-slave mode
What can it do
- Memory storage and persistence: Redis supports asynchronous writing of data in memory to disk without affecting continued service
- The operation of fetching the latest N data, for example, you can put the ids of the latest 10 comments in the Redis List
- Simulating features like HttpSession that need an expiration time
- Publish and subscribe messaging systems
- Timer, counter
Where to get it
- Redis website
- Redis Chinese website
How to use it
- Data types, basic operations, and configurations
- Persistence and replication, RDB/AOF
- Control of transactions
- Replication (master-slave relationship)
The installation of Redis
Installing the Linux version
The default installation directory is /usr/local/bin
08. HelloWorld
- Single process
- A single-process model handles client requests; responses to events such as reads and writes are dispatched by wrapping the epoll function. Redis's actual processing speed depends entirely on the execution efficiency of the main process
- epoll is an enhanced version of select/poll, a multiplexed I/O interface on Linux, improved by the Linux kernel for handling large numbers of file descriptors. It can significantly raise CPU utilization for programs that hold many concurrent connections of which only a few are active
- 16 databases by default, indexed from zero like an array; database 0 is used initially, and the count can be changed in the configuration file
- select: switches databases
- dbsize: views the number of keys in the current database
- flushdb: clears the current database
- flushall: clears all databases
- Unified password management: all 16 databases share one password; either every connection works or none does
- Redis indexes all start from zero
(a session sketch follows this list)
- Why is the default port 6379
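A minimal redis-cli session sketch of the commands above (key names are made up for the demo):
```
127.0.0.1:6379> SET k1 v1
OK
127.0.0.1:6379> DBSIZE
(integer) 1
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> DBSIZE
(integer) 0
127.0.0.1:6379[1]> SELECT 0
OK
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> DBSIZE
(integer) 0
```
Note how the prompt changes to [1] while database 1 is selected, and how flushdb empties only the current database.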
10. Introduction to five commonly used data types
Five data types of Redis
String (String)
- String is the most basic Redis type: one key maps to one value, the same model as Memcached.
- The string type is binary safe: a Redis string can contain any data, such as a JPG image or a serialized object.
- The string type is the most basic Redis data type; a string value can be at most 512 MB
Hash (similar to a Map in Java)
- Redis Hash is a collection of key-value pairs.
- Redis hash is a mapping table of fields and values of string type. Hash is especially suitable for storing objects.
- Map&lt;String, Object&gt;
List (List)
- A Redis list is a simple list of strings, sorted by insertion order. You can add an element to the head (left) or tail (right) of the list.
- Under the hood it is implemented as a linked list
Set
- Redis's Set is an unordered collection of strings, implemented via a hash table
Sorted set (ZSet)
- Redis zset, like set, is a collection of string elements and does not allow duplicate members.
- The difference is that each element is associated with a double score.
- Redis uses scores to sort the members of a collection from smallest to largest. Members of a Zset are unique, but scores can be repeated.
- Where to find the commands for the common Redis data types
- Redis command reference
- Redis official website command reference
11. Key
Common commands
Command | Description |
---|---|
DEL key | This command is used to delete a key when it exists. |
DUMP key | Serializes the given key and returns the serialized value. |
EXISTS key | Checks whether the given key exists. |
EXPIRE key seconds | Sets the expiration time, in seconds, for a given key. |
EXPIREAT key timestamp | EXPIREAT is similar to EXPIRE in that it is used to set an expiration time for a key. The difference is that the EXPIREAT command accepts a UNIX timestamp as the time parameter. |
PEXPIRE key milliseconds | Sets the expiration time of the key in milliseconds. |
PEXPIREAT key milliseconds-timestamp | Set the timestamp (Unix timestamp) of the key expiration time in milliseconds |
KEYS pattern | Find all keys that match the given pattern. |
MOVE key db | Moves the key of the current database to the given database DB. |
PERSIST key | When you remove the expiration time of a key, the key persists. |
PTTL key | Returns the remaining expiration time of the key in milliseconds. |
TTL key | Returns the TTL (time to live) of a given key, in seconds. |
RANDOMKEY | Returns a random key from the current database. |
RENAME key newkey | Change the name of the key |
RENAMENX key newkey | Change the name of the key to newkey only when newkey does not exist. |
SCAN cursor [MATCH pattern] [COUNT count] | Iterate over the database keys in the database. |
TYPE key | Returns the type of the value stored by key. |
Examples
- keys *
- exists key: checks whether the given key exists
- move key db: moves a key from the current database into the given database
- expire key seconds: sets an expiration time for the given key
- ttl key: checks how many seconds remain before expiration; -1 means never expires, -2 means already expired
- type key: shows the type of your key
(a session sketch follows this list)
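A short session sketch of the key commands (the TTL value shown is illustrative, a few seconds after EXPIRE):
```
127.0.0.1:6379> SET k1 v1
OK
127.0.0.1:6379> KEYS *
1) "k1"
127.0.0.1:6379> EXISTS k1
(integer) 1
127.0.0.1:6379> EXPIRE k1 10
(integer) 1
127.0.0.1:6379> TTL k1
(integer) 7
127.0.0.1:6379> TYPE k1
string
```
Once the 10 seconds pass, TTL k1 returns -2 and the key is gone.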
12. String
Single key, single value
Common commands
Command | Description |
---|---|
SET key value | Sets the value of the specified key |
GET key | Gets the value of the specified key. |
GETRANGE key start end | Returns a substring of the string value stored at key. |
GETSET key value | Sets the value of the given key to value and returns the old value of the key. |
GETBIT key offset | Gets the bit at the specified offset for the string value stored by key. |
MGET key1 [key2…] | Gets the values of all (one or more) given keys. |
SETBIT key offset value | Sets or clears the bits at the specified offset for the string value held by key. |
SETEX key seconds value | Associate the value value with the key and set the expiration time of the key to seconds. |
SETNX key value | Set the key value only if the key does not exist. |
SETRANGE key offset value | Overwrites the string value stored for the given key with the value argument, starting at offset. |
STRLEN key | Returns the length of the string value stored by key. |
MSET key value [key value…] | Sets one or more key-value pairs at the same time. |
MSETNX key value [key value…] | Sets one or more key-value pairs simultaneously, if and only if none of the given keys exist. |
PSETEX key milliseconds value | This command is similar to the SETEX command, but it sets the lifetime of the key in milliseconds, rather than seconds, as the SETEX command does. |
INCR key | Increment the value of the number stored in the key by one. |
INCRBY key increment | Add the value stored in the key to the given increment. |
INCRBYFLOAT key increment | Add the value stored by key to the given floating point increment. |
DECR key | Subtract the number stored in the key by one. |
DECRBY key decrement | Values stored by key minus the given decrement value. |
APPEND key value | If the key already exists and is a string, the APPEND command appends the specified value to the end of the key’s original value. |
Examples
- set/get/del/append/strlen
- incr/decr/incrby/decrby: the value must be a number to add or subtract
- getrange/setrange: read or overwrite a substring within a given range
- setex (set with expire) key seconds value / setnx (set if not exist) key value
- mset/mget/msetnx: process multiple key-value pairs at once
- getset: set a new value and return the old one
(a session sketch follows this list)
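A session sketch of the common string operations (assuming an empty database):
```
127.0.0.1:6379> SET k1 hello
OK
127.0.0.1:6379> APPEND k1 " world"
(integer) 11
127.0.0.1:6379> STRLEN k1
(integer) 11
127.0.0.1:6379> GETRANGE k1 0 4
"hello"
127.0.0.1:6379> SET counter 100
OK
127.0.0.1:6379> INCR counter
(integer) 101
127.0.0.1:6379> INCRBY counter 10
(integer) 111
127.0.0.1:6379> GETSET k1 bye
"hello world"
```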
13. List
Single key, multiple values
Common commands
Command | Description |
---|---|
BLPOP key1 [key2 ] timeout | Removes and retrieves the first element of the list. If there are no elements in the list, the list is blocked until the wait times out or an eject element is found. |
BRPOP key1 [key2 ] timeout | Removes and retrieves the last element of the list. If there are no elements in the list, the list is blocked until the wait times out or an eject element is found. |
BRPOPLPUSH source destination timeout | Pops a value from a list, inserts the pop-up element into another list and returns it; If the list has no elements, it blocks until the wait times out or a popup element is found. |
LINDEX key index | Gets the elements in the list by index |
LINSERT key BEFORE/AFTER pivot value | Inserts an element before or after the element in the list |
LLEN key | Get the list length |
LPOP key | Removes and gets the first element of the list |
LPUSH key value1 [value2] | Inserts one or more values into the list header |
LPUSHX key value | Inserts a value into the head of an existing list |
LRANGE key start stop | Gets the elements in the specified range of the list |
LREM key count value | Remove list elements |
LSET key index value | Set the value of a list element by index |
LTRIM key start stop | To trim a list, that is, to keep only elements within a specified range, and to remove all elements that are not within a specified range. |
RPOP key | Removes the last element in the list, returning the removed element. |
RPOPLPUSH source destination | Removes the last element of the list and adds it to another list and returns |
RPUSH key value1 [value2] | Adds one or more values to the list |
RPUSHX key value | Add a value to an existing list |
Examples
- lpush/rpush
- lpop/rpop/lrange
- lindex: gets an element by index (top to bottom)
- llen
- lrem key N value: deletes N occurrences of value
- ltrim key start_index end_index: truncates the list to the given range, then assigns the result back to the key
- rpoplpush source_list destination_list
- lset key index value
- linsert key before/after existing_value new_value
(a session sketch follows the performance summary below)
Performance summary:
- It is a linked list of strings; elements can be inserted at either end.
- If the key does not exist, a new linked list is created.
- If the key already exists, the new content is appended.
- If all values are removed, the corresponding key disappears too.
- Linked lists are extremely efficient at both ends, but notoriously inefficient when operating on middle elements.
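A session sketch of the basic list operations (assuming an empty database):
```
127.0.0.1:6379> LPUSH mylist a b c
(integer) 3
127.0.0.1:6379> LRANGE mylist 0 -1
1) "c"
2) "b"
3) "a"
127.0.0.1:6379> RPUSH mylist x
(integer) 4
127.0.0.1:6379> LPOP mylist
"c"
127.0.0.1:6379> LINDEX mylist 1
"a"
127.0.0.1:6379> LLEN mylist
(integer) 3
```
Note that LPUSH a b c pushes each value onto the head in turn, so the list reads c, b, a.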
14. Set
Single key, multiple values
Common commands
Command | Description |
---|---|
SADD key member1 [member2] | Adds one or more members to a collection |
SCARD key | Gets the number of members of the collection |
SDIFF key1 [key2] | Returns the difference set of all sets given |
SDIFFSTORE destination key1 [key2] | Returns the difference set for a given collection and stores it in destination |
SINTER key1 [key2] | Returns the intersection of all sets given |
SINTERSTORE destination key1 [key2] | Returns the intersection of all the given collections and stores them in destination |
SISMEMBER key member | Check whether the member element is a member of the collection key |
SMEMBERS key | Returns all members of the collection |
SMOVE source destination member | Move the member element from the source collection to the Destination collection |
SPOP key | Removes and returns a random element from the collection |
SRANDMEMBER key [count] | Returns one or more random numbers in a collection |
SREM key member1 [member2] | Removes one or more members of a collection |
SUNION key1 [key2] | Returns the union of all given sets |
SUNIONSTORE destination key1 [key2] | The union of all given collections is stored in the Destination collection |
SSCAN key cursor [MATCH pattern] [COUNT count] | Iterate over the elements in the collection |
Examples
- sadd/smembers/sismember
- scard: gets the number of elements in the set
- srem key value: deletes an element from the set
- srandmember key N: returns N random members
- spop key: randomly pops an element out of the set
- smove key1 key2 member: moves a member of key1 into key2
- Mathematical set operations (a session sketch follows this list)
  - Difference: sdiff
  - Intersection: sinter
  - Union: sunion
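A session sketch of the set operations (set output order is not guaranteed; the ordering shown is illustrative):
```
127.0.0.1:6379> SADD s1 a b c
(integer) 3
127.0.0.1:6379> SADD s2 b c d
(integer) 3
127.0.0.1:6379> SISMEMBER s1 a
(integer) 1
127.0.0.1:6379> SINTER s1 s2
1) "b"
2) "c"
127.0.0.1:6379> SDIFF s1 s2
1) "a"
127.0.0.1:6379> SUNION s1 s2
1) "a"
2) "b"
3) "c"
4) "d"
```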
15. Hash
The KV pattern stays the same, but V is itself a key-value pair
Common commands
Command | Description |
---|---|
HDEL key field1 [field2] | Deletes one or more hash table fields |
HEXISTS key field | Check whether the specified field in the hash table key exists. |
HGET key field | Gets the value of the specified field stored in the hash table. |
HGETALL key | Gets all the fields and values of the specified key in the hash table |
HINCRBY key field increment | Add increment to the integer value of the specified field in the hash table key. |
HINCRBYFLOAT key field increment | Increments the floating point value of the specified field in the hash table key. |
HKEYS key | Gets all fields in the hash table |
HLEN key | Gets the number of fields in the hash table |
HMGET key field1 [field2] | Gets the values of all given fields |
HMSET key field1 value1 [field2 value2 ] | Set multiple field-value pairs into the hash key at the same time. |
HSET key field value | Set the value of field in hash table key to value. |
HSETNX key field value | Set the value of the hash table field only if the field field does not exist. |
HVALS key | Gets all values in the hash table. |
HSCAN key cursor [MATCH pattern] [COUNT count] | Iterates key-value pairs in a hash table. |
Examples
- hset/hget/hmset/hmget/hgetall/hdel
- hlen
- hexists key field: checks whether the given field exists in the hash
- hkeys/hvals
- hincrby/hincrbyfloat
- hsetnx
(a session sketch follows this list)
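A session sketch of a hash used to store an object (the user fields are invented for the demo):
```
127.0.0.1:6379> HMSET user:1 name Alice age 30
OK
127.0.0.1:6379> HGET user:1 name
"Alice"
127.0.0.1:6379> HGETALL user:1
1) "name"
2) "Alice"
3) "age"
4) "30"
127.0.0.1:6379> HINCRBY user:1 age 1
(integer) 31
127.0.0.1:6379> HEXISTS user:1 email
(integer) 0
```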
16. ZSet
Each element is associated with a double score, and members are sorted from smallest to largest score
A zset adds a score to each set member: where a set was k1 v1 v2 v3, a zset is k1 score1 v1 score2 v2
Common commands
Command | Description |
---|---|
ZADD key score1 member1 [score2 member2] | Adds one or more members to an ordered collection, or updates the scores of existing members |
ZCARD key | Gets the number of members of an ordered collection |
ZCOUNT key min max | Computes the number of members in an ordered set with a specified interval fraction |
ZINCRBY key increment member | Increment the score of a specified member in an ordered set |
ZINTERSTORE destination numkeys key [key… | Computes the intersection of one or more ordered sets given and stores the result set in a new ordered set key |
ZLEXCOUNT key min max | Computes the number of members in the specified dictionary range in an ordered collection |
ZRANGE key start stop [WITHSCORES] | Returns an ordered collection of members within a specified interval by indexing the interval |
ZRANGEBYLEX key min max [LIMIT offset count] | Returns a member of an ordered collection through a dictionary interval |
ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT] | Returns an ordered set of members within a specified interval by a fraction |
ZRANK key member | Returns the index of the specified member in the ordered collection |
ZREM key member [member… | Removes one or more members of an ordered collection |
ZREMRANGEBYLEX key min max | Removes all members of the given dictionary range from the ordered collection |
ZREMRANGEBYRANK key start stop | Removes all members of the given rank range from the ordered collection |
ZREMRANGEBYSCORE key min max | Removes all members of the given fractional interval from the ordered set |
ZREVRANGE key start stop [WITHSCORES] | Returns the members of the ordered set within the specified interval, with the score from highest to lowest by index |
ZREVRANGEBYSCORE key max min [WITHSCORES] | Returns the members of the ordered set within the specified range of scores, sorted from highest to lowest |
ZREVRANK key member | Returns the ranking of the specified members of an ordered set, ordered in decreasing order (from largest to smallest) by score value |
ZSCORE key member | Returns the score value of a member in an ordered set |
ZUNIONSTORE destination numkeys key [key…] | Computes the union of a given one or more ordered sets and stores it in a new key |
ZSCAN key cursor [MATCH pattern] [COUNT count] | Iterating over elements in an ordered set (including element members and element scores) |
Examples
- zadd/zrange
  - withscores
- zrangebyscore key start_score end_score
  - withscores
  - ( means the boundary score is excluded (open interval)
  - limit: restricts the result; limit offset count starts at the given offset and returns count members
- zrem key value: deletes the member with its score
- zcard / zcount key score_range / zrank key value (gets the rank) / zscore key value (gets the score)
- zrevrank key value: gets the rank in reverse order
- zrevrange
- zrevrangebyscore key end_score start_score
(a session sketch follows this list)
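A session sketch of the sorted-set operations (members and scores are invented):
```
127.0.0.1:6379> ZADD rank 60 v1 70 v2 80 v3
(integer) 3
127.0.0.1:6379> ZRANGE rank 0 -1 WITHSCORES
1) "v1"
2) "60"
3) "v2"
4) "70"
5) "v3"
6) "80"
127.0.0.1:6379> ZRANGEBYSCORE rank (60 80
1) "v2"
2) "v3"
127.0.0.1:6379> ZRANK rank v3
(integer) 2
127.0.0.1:6379> ZSCORE rank v2
"70"
```
The ( prefix in (60 makes the lower bound exclusive, so v1 with score 60 is skipped.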
17. Configuration file introduction
The Redis configuration file is located in the Redis installation directory. Its file name is redis.conf (on Windows, redis.windows.conf).
You can view or set configuration items through the CONFIG command.
Syntax
The Redis CONFIG command has the following format:
```
redis 127.0.0.1:6379> CONFIG GET CONFIG_SETTING_NAME
```
Example
```
redis 127.0.0.1:6379> CONFIG GET loglevel
1) "loglevel"
2) "notice"
```
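Settings can also be changed at runtime with CONFIG SET (a sketch; the change takes effect immediately but is not written back to redis.conf):
```
redis 127.0.0.1:6379> CONFIG SET loglevel "notice"
OK
```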
Parameters
The redis.conf configuration items are described below.
Note that the list is not complete; see redis.conf itself for more.
No. | Configuration item | Description |
---|---|---|
1 | daemonize no | Redis does not run as a daemon by default; change this to yes to enable the daemon (Windows does not support daemons, so it stays no). |
2 | pidfile /var/run/redis.pid | When Redis runs as a daemon, the pid is written to /var/run/redis.pid by default; the path can be set with pidfile. |
3 | port 6379 | The default port is 6379. The author explained the choice in a blog post: 6379 spells MERZ on a phone keypad, named after the Italian singer Alessia Merz. |
4 | bind 127.0.0.1 | The IP address of the bound host. |
5 | timeout 300 | The number of seconds a client may stay idle before the connection is closed; 0 disables the function. |
6 | loglevel notice | Redis supports four log levels: debug, verbose, notice, and warning. The default is notice. |
7 | logfile stdout | Logging goes to standard output by default. If Redis runs in daemon mode with logging set to standard output, the logs are sent to /dev/null. |
8 | databases 16 | Sets the number of databases. The default database is 0; a connection can pick a database with the SELECT command. |
9 | save &lt;seconds&gt; &lt;changes&gt; – the default configuration provides three conditions: save 900 1, save 300 10, save 60 10000 | Meaning, respectively: 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), and 10,000 changes within 60 seconds. Specifies how many updates within a period trigger synchronizing the data to the data file. |
10 | rdbcompression yes | Redis compresses RDB data with LZF. Disabling this saves CPU time but makes the database files huge. |
11 | dbfilename dump.rdb | Specifies the local database file name; the default is dump.rdb. |
12 | dir ./ | Specifies the directory where the local database is stored. |
13 | slaveof &lt;masterip&gt; &lt;masterport&gt; | If the local host is a slave, sets the IP address and port of the master; Redis automatically synchronizes data from the master on startup. |
14 | masterauth &lt;master-password&gt; | The password the slave uses to connect to the master when the master has password protection enabled. |
15 | requirepass foobared | Sets the Redis connection password. If configured, clients must supply the password with the AUTH command when connecting. Disabled by default. |
16 | maxclients 128 | The maximum number of client connections Redis may have open at once. The default limit is the maximum number of file descriptors the Redis process can open; maxclients 0 means no limit. When the limit is reached, Redis closes new connections and returns a "max number of clients reached" error. |
17 | maxmemory &lt;bytes&gt; | Specifies Redis's maximum memory limit. Redis loads data into memory at startup; when the limit is reached, it first tries to evict expired or expiring keys. Redis's VM mechanism keeps keys in memory and values in swap. |
18 | appendonly no | Specifies whether to log every update operation. By default Redis writes data to disk asynchronously; if this is not enabled, data may be lost during a power outage, because Redis itself synchronizes data files according to the save conditions above and some data exists only in memory for a while. The default is no. |
19 | appendfilename appendonly.aof | Specifies the update log file name; the default is appendonly.aof. |
20 | appendfsync everysec | Three values are available. no: wait for the operating system to flush its cache to disk (fast); always: call fsync() after every update to write the data to disk (slow but safe); everysec: synchronize once per second (the default). |
21 | vm-enabled no | Whether to enable the virtual memory (VM) mechanism; the default is no. Briefly, the VM mechanism stores data in pages: Redis swaps the less-visited pages, i.e. cold data, to disk, and pages that are accessed again are automatically swapped from disk back into memory. |
22 | vm-swap-file /tmp/redis.swap | The virtual memory file path; the default is /tmp/redis.swap. The file cannot be shared by multiple Redis instances. |
23 | vm-max-memory 0 | All data above vm-max-memory is stored in virtual memory. However small the setting, all index data (the Redis keys) stays in memory; with 0, in effect all values live on disk. The default is 0. |
24 | vm-page-size 32 | The Redis swap file is divided into many pages; one object can be stored across multiple pages, but a page cannot be shared by multiple objects. Set vm-page-size according to the size of the stored data: a page size of 32 or 64 bytes is best for small objects; use a larger page for large objects, or keep the default if in doubt. |
25 | vm-pages 134217728 | Sets the number of pages in the swap file. Since the page table (a bitmap marking pages free or used) is kept in memory, every 8 pages on disk consume 1 byte of memory. |
26 | vm-max-threads 4 | The number of threads that may access the swap file; best not to exceed the machine's core count. If set to 0, all swap-file operations are serial, which may cause long delays. The default is 4. |
27 | glueoutputbuf yes | Whether to combine small packets into one packet when replying to clients; enabled by default. |
28 | hash-max-zipmap-entries 64, hash-max-zipmap-value 512 | A special compact hash encoding is used while the number of entries and the size of the largest element stay below these thresholds. |
29 | activerehashing yes | Whether to enable incremental rehashing; enabled by default. |
30 | include /path/to/local.conf | Includes an additional configuration file, so multiple Redis instances on one host can share a common file while each keeps its own specific one. |
18. Persistent RDB
RDB (Redis DataBase)
What is it
- Writes a snapshot of the in-memory data set to disk at specified intervals; recovery reads the snapshot file straight back into memory
- Redis forks a separate child process for persistence, which first writes the data to a temporary file; when the persistence pass finishes, this file replaces the previous persisted file. Throughout the whole process, the main process performs no disk I/O, which ensures very high performance. If large-scale recovery is needed and the integrity of the most recent data is not critical, RDB is more efficient than AOF. RDB's drawback is that data written after the last persistence can be lost.
Fork
Fork creates a copy of the current process: the new process has all the same data values (variables, environment variables, program counter, etc.) as the original, but it is a new process, a child of the original
- RDB saves to the dump.rdb file
- The relevant configuration lives in the configuration file – search redis.conf for
### SNAPSHOTTING ###
How to trigger an RDB snapshot
- Default snapshot configuration in the configuration file:
```
dbfilename dump.rdb
```
- Cold copy for later reuse
  - e.g. cp dump.rdb dump_new.rdb
- The save and bgsave commands (a session sketch follows this list)
  - save: only saves; everything else blocks and write operations stop
  - bgsave: Redis snapshots asynchronously in the background while continuing to serve client requests; run the lastsave command to get the time of the last successful snapshot
- Running the flushall command also generates a dump.rdb file, but it is empty and meaningless
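A sketch of triggering a snapshot by hand (the timestamp and directory are illustrative):
```
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> LASTSAVE
(integer) 1700000000
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/usr/local/bin"
```
LASTSAVE returns the Unix timestamp of the last successful save; CONFIG GET dir shows where dump.rdb is written.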
How to restore
- Move the backup file (dump.rdb) into the Redis working directory and start the service
- CONFIG GET dir returns that directory
Advantages and Disadvantages
- Advantages
  - Suitable for large-scale data recovery
  - Acceptable when the requirements for data integrity and consistency are not strict
- Disadvantages
  - Backups happen at intervals, so if Redis goes down unexpectedly, all changes since the last snapshot are lost
  - When forking, the data in memory is cloned, so roughly double the memory footprint needs to be planned for
Common configuration
save settings (the defaults):
- save 900 1: snapshot if at least 1 key changed within 900 seconds (15 minutes)
- save 300 10: snapshot if at least 10 keys changed within 300 seconds (5 minutes)
- save 60 10000: snapshot if at least 10,000 keys changed within 60 seconds
stop-writes-on-bgsave-error yes: stop accepting writes if a background save fails
rdbcompression yes: compress RDB files
rdbchecksum yes: after storing a snapshot, have Redis verify it with a checksum
How to disable it
```
redis-cli config set save ""
```
Summary
- RDB is a very compact file.
- When saving an RDB file, the only work the parent process does is fork a child; the child does everything that follows, and the parent performs no other I/O, so RDB persistence maximizes Redis's performance.
- Compared with AOF, RDB restores large data sets faster.
- The risk of data loss is higher.
- RDB forks a child process each time it saves the data set to disk; when the data set is large, the fork is time-consuming and may keep Redis from responding to client requests for some milliseconds.
19. AOF of persistence
AOF (Append Only File)
What is it
Every write operation is recorded as a log: all write commands executed by Redis are recorded (reads are not), and the file may only be appended to, never rewritten in place. When Redis restarts, it replays the write commands from front to back according to the log file to rebuild the data
AOF configuration
- The relevant configuration is located in the configuration file – search in redis.conf
### APPEND ONLY MODE ###
- AOF saves to the appendonly.aof file (change the filename via appendfilename in the configuration file)
AOF startup/repair/recovery
- Normal recovery
  - Start: set the option to yes (change the default appendonly no to yes)
  - Copy an AOF file that has data into the corresponding directory (config get dir)
  - Recover: restart Redis, which reloads the file
- Abnormal recovery
  - Start: set the option to yes (change the default appendonly no to yes)
  - Back up the corrupted AOF file
  - Repair: run redis-check-aof --fix against the file (see the shell sketch after this list)
  - Recover: restart Redis, which reloads the repaired file
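A sketch of the abnormal-recovery flow from the shell (paths assume the AOF file sits in the current working directory; adjust to your setup):
```
$ cp appendonly.aof appendonly.aof.bak     # back up the corrupted file first
$ redis-check-aof --fix appendonly.aof     # truncate the broken tail of the log
$ redis-server /path/to/redis.conf         # restart; Redis reloads the repaired AOF
```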
```
########## APPEND ONLY MODE ##########
appendonly yes
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# appendfsync always    # worst performance, best data integrity
appendfsync everysec
# appendfsync no
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```
Rewrite
- What it is
  - AOF keeps appending to the file, so the file grows ever larger. To avoid this, a rewrite mechanism was added: when the AOF file size exceeds the configured threshold, Redis compacts the file's contents, keeping only the minimal set of commands that can rebuild the data. It can also be triggered manually with the bgrewriteaof command
- Rewrite principle
  - When the AOF file grows too large, a new process is forked to rewrite the file (it too writes a temporary file first and renames it at the end). The new process traverses its copy of the in-memory data and writes a set statement for each record. Rewriting does not read the old AOF file; it writes a brand-new AOF file from the entire in-memory database contents, similar to taking a snapshot
- Trigger
  - Redis records the AOF size as of the last rewrite; by default, a rewrite is triggered when the AOF file has doubled since then and is larger than 64MB
Advantages and Disadvantages
- Advantages
  - Sync on every change: appendfsync always – every data change is immediately logged to disk; the worst performance but the best data integrity
  - Sync every second: appendfsync everysec – asynchronous; if the service goes down within a second, that second's data is lost
  - No sync: appendfsync no – never forced
- Disadvantages
  - For the same data set, AOF files are much larger than RDB files, and recovery is slower than RDB
  - AOF runs less efficiently than RDB; the per-second sync strategy performs well, while sync-on-every-write does not
Summary
- An AOF file is an append-only log file
- Redis can automatically rewrite the AOF in the background when the file grows too large
- AOF files record all writes to the database in order, in the Redis protocol format, so their contents are easy to read and parse
- For the same data set, AOF files are usually larger than RDB files
- Depending on the fsync strategy used, AOF may be slower than RDB
Which one to use
Performance suggestion: use RDB for backups; it is recommended to persist RDB files only on the slave, and one save every 15 minutes (save 900 1) is enough.
If AOF is enabled, at worst only 2 seconds of data are lost, but it brings continuous I/O; moreover, at the end of a rewrite, the new data produced during the rewrite must be written to the new file, which blocks. A rewrite is therefore typically configured to trigger only when the new AOF file is 100% larger than the previous one and larger than 5GB.
If AOF is not enabled, high availability can be achieved with master-slave replication alone, which saves a lot of I/O and avoids the blocking caused by rewrites. The cost: if the master and slave go down at the same time, more than ten minutes of data may be lost; the newer of the two RDB files is then chosen for recovery.
20. Transaction
What is it
More than one command can be executed at once; a transaction is essentially a collection of commands. All the commands in a transaction are serialized and executed sequentially, with no other command inserted in between.
What can it do
Executes a series of commands from a queue in one go: sequentially and exclusively.
How to use it
Common commands
The command | describe |
---|---|
DISCARD | Cancels the transaction, abandoning all commands in the transaction block. |
EXEC | Execute all commands within the transaction block. |
MULTI | Marks the start of a transaction block. |
UNWATCH | Unmonitor all keys with the WATCH command. |
WATCH key [key…] | Monitors one or more keys; if any of them is changed by another command before the transaction executes, the transaction is interrupted. |
Examples
Normal execution
Abandoning a transaction
All punished together
- Similar to a Java compile-time error: if a command is already wrong when queued, the whole transaction is rejected at EXEC
Each pays its own debt
- Similar to a Java runtime error: if a queued command only fails during EXEC, just that command fails while the rest still execute
(a session sketch follows)
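A session sketch of a normal transaction and an abandoned one (assuming an empty database):
```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET k1 v1
QUEUED
127.0.0.1:6379> INCR counter
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) (integer) 1
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET k2 v2
QUEUED
127.0.0.1:6379> DISCARD
OK
127.0.0.1:6379> GET k2
(nil)
```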
Watch monitoring
Pessimistic lock / optimistic lock / CAS (Check And Set)
- Pessimistic locking
- Pessimistic locking, as the name implies, is pessimistic: every time you fetch data, you assume someone else will modify it, so you lock the data each time, and anyone else who wants it blocks until they obtain the lock. Traditional relational databases use many such locking mechanisms, such as row locks, table locks, read locks, and write locks, all acquired before the operation.
- Optimistic locking
- Optimistic locking, as the name implies, is optimistic: every time you fetch data, you assume others will not modify it, so you do not lock; but when updating, you check whether anyone else updated the data in the meantime, using mechanisms such as a version number. Optimistic locking suits read-heavy applications and improves throughput.
- Optimistic locking policy: the submitted version must be greater than the record's current version for the update to be performed
- CAS
Example: a credit card's available balance and debt
- Initialize the available balance and the debt
- No interference: watch the keys, then multi, to ensure the two amounts change within the same transaction
- Interference in between
  - The key is watched; if the key is changed by another client, the subsequent execution of the transaction fails
- unwatch
- Once exec is executed, all the watch locks added before it are removed (single-use)
(a session sketch follows)
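A sketch of WATCH protecting the balance/debt update (the amounts are invented; if another client changed balance between WATCH and EXEC, EXEC would return (nil) instead):
```
127.0.0.1:6379> SET balance 100
OK
127.0.0.1:6379> SET debt 0
OK
127.0.0.1:6379> WATCH balance
OK
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> DECRBY balance 20
QUEUED
127.0.0.1:6379> INCRBY debt 20
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 80
2) (integer) 20
```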
Summary
- The watch directive works like an optimistic lock: if a watched key's value has been changed by another client (for example, a list has been pushed/popped by someone else), the entire transaction queue is not executed
- When keys are monitored with WATCH before the transaction executes, if any watched key changes after the WATCH, the EXEC command aborts the transaction and returns a null multi-bulk reply to tell the caller that the transaction failed
Three phases
- Open: a transaction is started with MULTI
- Enqueue: multiple commands are enqueued into the transaction; they are not executed immediately but placed in the transaction queue awaiting execution
- Execute: the transaction is triggered by the EXEC command
Three features
- Isolated operations: all commands in the transaction are serialized and executed in order; while it runs, the transaction is not interrupted by command requests from other clients.
- No isolation-level concept: the queued commands are not actually executed until the transaction is committed, so there is no headache like "queries inside the transaction see its updates while queries outside do not".
- No guarantee of atomicity: if one command fails within a transaction, Redis still executes the subsequent commands, with no rollback
Redis transactions therefore do not follow the A (atomicity) and I (isolation) of traditional ACID
21. Message subscription publishing Introduction
Understand it through the observer pattern
A mode of inter-process message communication: senders (pub) publish messages and subscribers (sub) receive them.
The following figure shows channel Channel1 and the relationship between the three clients that subscribe to this channel — Client2, Client5, and Client1:
When a new message is sent to channel Channel1 via PUBLISH, the message is sent to the three clients that subscribed to it:
Common commands
Command | Description |
---|---|
PSUBSCRIBE pattern [pattern…] | Subscribes to one or more channels matching the given patterns. |
PUBSUB subcommand [argument [argument…]] | Views the state of the subscription and publication system. |
PUBLISH channel message | Sends a message to the specified channel. |
PUNSUBSCRIBE [pattern [pattern…]] | Unsubscribes from all channels of the given patterns. |
SUBSCRIBE channel [channel…] | Subscribes to the given channel or channels. |
UNSUBSCRIBE [channel [channel…]] | Unsubscribes from the given channels. |
Example
The following example demonstrates how publish/subscribe works. We create a subscription channel named redisChat:
```
redis 127.0.0.1:6379> SUBSCRIBE redisChat
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "redisChat"
3) (integer) 1
```
Now we start another Redis client and publish two messages on the same channel redisChat, so the subscriber receives them:
```
redis 127.0.0.1:6379> PUBLISH redisChat "redis is a great caching technique"
(integer) 1
redis 127.0.0.1:6379> PUBLISH redisChat "Learn redis by runoob.com"
(integer) 1

# The subscriber's client displays:
1) "message"
2) "redisChat"
3) "redis is a great caching technique"
1) "message"
2) "redisChat"
3) "Learn redis by runoob.com"
```
Subscribe to multiple channels with the wildcard *:
```
PSUBSCRIBE new*
```
Receive the messages:
```
PUBLISH new1 redis2015
```
22&23. Master slave replication
22.
What is it
Definition: the master/slave mechanism. After the host's data is updated, it is automatically synchronized to the slave hosts according to the configuration and strategy. The master is mainly for writing, the slaves mainly for reading
What can it do
- Reading and writing separation
- Disaster recovery
How to use it
Preparation
- Configure the slave (library), not the master (library)
- Slave configuration command (a session sketch appears at the end of this section):
```
slaveof <master-ip> <master-port>
```
- The setting lasts only until the slave disconnects from the master; it must be reissued after every disconnect, unless it is written into redis.conf (search for #### REPLICATION ####)
- Use info replication to check replication status
- Modify the configuration file details
  - Copy redis.conf and rename it redis[port].conf
  - Enable daemonize yes
  - Set the pid file name
  - Specify the port
  - Set the log file name
  - Set the dump.rdb name
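A minimal sketch of attaching a slave and checking its status (the ports are illustrative: a master on 6379 and a slave on 6380):
```
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6380> INFO replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
...
```
On the master, INFO replication would show role:master and connected_slaves:1.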
Three common setups
One master, two slaves
- Init
- One Master and two slaves
- Viewing the logs
  - The master's log
  - The slave's log
info replication
- Master-slave Q&A demonstration
  - The starting-point question: do slave1 and slave2 copy from the beginning, or from the point where they join? For example, if they join at k4, are the earlier k1-k3 copied too?
    - A: They copy from the very beginning; k1-k3 are copied as well
  - Can a slave write? Can it set?
    - A: No, a slave cannot write or set; only the master can write
  - What happens after the master shuts down? Do the slaves take over or stand by?
    - A: The slaves stand by (still slaves, waiting for the master)
  - After the master comes back and writes new records, can the slaves still replicate them?
    - A: Yes, replication continues
  - What happens when one of the slaves goes down? Can it catch up with the others afterwards?
    - A: Each time it disconnects from the master it must be reattached with slaveof, unless the setting is written into redis.conf (search for #### REPLICATION ####)
23.
Chained replication
- A slave can in turn be the master of the next slave: slaves can accept connection and sync requests from other slaves, so a slave acts as the next master in the chain. This effectively relieves the master's synchronization load (a slave of a slave is still a slave).
- Switching masters mid-chain erases the previous data and rebuilds the copy from the new master:
```
slaveof <new master IP> <new master port>
```
Becoming the master
```
SLAVEOF no one
```
- Makes the current database stop synchronizing with other databases and become a master itself.
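The same promotion can also be issued from Jedis. A minimal sketch under assumptions not in the original notes: 6380 was a slave and its master (6379) has just gone down.

```java
import redis.clients.jedis.Jedis;

public class TestPromotion {
    public static void main(String[] args) {
        // 6380 was a slave; its master is assumed to be down
        Jedis jedis = new Jedis("127.0.0.1", 6380);
        jedis.slaveofNoOne(); // equivalent to SLAVEOF no one: stop syncing, become a master
        jedis.set("k5", "written-to-the-new-master"); // writes now succeed here
        System.out.println(jedis.get("k5"));
    }
}
```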
How it works
- After connecting to the master successfully, the slave sends the sync command.
- On receiving it, the master starts a background save and buffers every data-modifying command received in the meantime. When the background save finishes, the master sends the entire data file to the slave for one complete synchronization.
- Full replication: the slave receives the database file, saves it to disk, and loads it into memory.
- Incremental replication: afterwards, the master forwards each newly collected modifying command to the slaves in turn, keeping them in sync.
- However, whenever a slave reconnects to the master, a full synchronization (full copy) is performed automatically.
Sentinel mode (sentinel)
A set of sentinels can monitor multiple masters simultaneously.
What is it
An automated version of "becoming the master": sentinels monitor the master in the background and, if it fails, automatically promote one of its slaves to master based on votes.
How to use it (steps)
- Adjust the structure: 6379 as master, with 6380 and 6381 as slaves.
- Create a new sentinel.conf file; the name must not be misspelled.
- Configure the sentinel by filling in:
```
sentinel monitor <monitored-master-name (chosen by you)> 127.0.0.1 6379 1
```
- The last number, 1, is commonly explained as the number of votes a slave must receive to replace the dead master. (Note: this differs from the official description; an excerpt from the official documentation follows below.)
- Start the sentinel:
```
redis-sentinel /sentinel.conf
```
(The preceding directory may vary depending on your setup.)
- Demonstrate normal master-slave operation
- Kill the old master
- A new master is voted in
- Restart the former master and the slaves, then check info replication
Q: If the downed master restarts, will there be a conflict between two masters? A: No, the original master becomes a slave.
The general syntax is:
```
sentinel monitor <master-group-name> <ip> <port> <quorum>
```
For the sake of clarity, let’s check line by line what the configuration options mean:
The first line is used to tell Redis to monitor a master called mymaster, that is at address 127.0.0.1 and port 6379, with a quorum of 2. Everything is pretty obvious but the quorum argument:
- The quorum is the number of Sentinels that need to agree about the fact the master is not reachable, in order to really mark the master as failing, and eventually start a failover procedure if possible.
- However the quorum is only used to detect the failure. In order to actually perform a failover, one of the Sentinels need to be elected leader for the failover and be authorized to proceed. This only happens with the vote of the majority of the Sentinel processes.
So for example if you have 5 Sentinel processes, and the quorum for a given master set to the value of 2, this is what happens:
- If two Sentinels agree at the same time about the master being unreachable, one of the two will try to start a failover.
- If there are at least a total of three Sentinels reachable, the failover will be authorized and will actually start.
In practical terms this means during failures Sentinel never starts a failover if the majority of Sentinel processes are unable to talk (aka no failover in the minority partition).
Source
Vocabulary: quorum /ˈkwɔːrəm/ n. quorum; the minimum number required to proceed.
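On the client side, Jedis can discover the current master through the sentinels instead of hard-coding its address. A minimal sketch under assumptions not in the original: three sentinels listening on 26379-26381, a master group named mymaster, and a Jedis version (3+) where close() returns a pooled connection to its pool.

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class TestSentinelClient {
    public static void main(String[] args) {
        // host:port of each sentinel process (assumed ports)
        Set<String> sentinels = new HashSet<>();
        sentinels.add("127.0.0.1:26379");
        sentinels.add("127.0.0.1:26380");
        sentinels.add("127.0.0.1:26381");

        // The pool asks the sentinels for the current master of "mymaster"
        // and reconnects to the newly elected master after a failover
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
        Jedis jedis = null;
        try {
            jedis = pool.getResource();
            jedis.set("k1", "v1"); // always executed against the current master
        } finally {
            if (jedis != null) {
                jedis.close(); // return the connection to the pool
            }
            pool.destroy();
        }
    }
}
```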
Drawbacks of replication
Replication delay
Since all writes happen on the master and are only then synchronized to the slaves, there is a lag between the master and the slaves. The lag worsens when the system is busy, and grows as the number of slaves increases.
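A quick way to observe, and tolerate, this lag from Jedis: write on the master, then poll the slave until the value arrives. A sketch under assumptions not in the original: 6380 is already a slave of 6379, and lagTest is a hypothetical key.

```java
import redis.clients.jedis.Jedis;

public class TestReplicationLag {
    public static void main(String[] args) throws InterruptedException {
        Jedis master = new Jedis("127.0.0.1", 6379);
        Jedis slave = new Jedis("127.0.0.1", 6380); // assumed slave of 6379

        master.set("lagTest", "v1");

        // Poll the slave a few times: replication is asynchronous, so the
        // value may not be visible immediately after the master accepts it
        String value = null;
        for (int i = 0; i < 10 && value == null; i++) {
            value = slave.get("lagTest");
            if (value == null) {
                Thread.sleep(100);
            }
        }
        System.out.println("slave sees: " + value);
    }
}
```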
24. Jedis_ Test connectivity
- Create an ordinary Maven project
- In pom.xml, add the key dependency below:
```xml
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.1.0</version>
</dependency>
```
- Create a new class and test connectivity; source code as follows:
TestPing.java
```java
package com.lun.shang;

import redis.clients.jedis.Jedis;

public class TestPing {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // Redis connected successfully: prints PONG
        System.out.println(jedis.ping());
    }
}
```
25. Jedis_ Common API
TestAPI.java
```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

import redis.clients.jedis.Jedis;

public class TestAPI {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // key
        Set<String> keys = jedis.keys("*");
        for (Iterator<String> iterator = keys.iterator(); iterator.hasNext();) {
            String key = iterator.next();
            System.out.println(key);
        }
        System.out.println("jedis.exists====>" + jedis.exists("k2"));
        System.out.println(jedis.ttl("k1"));
        // String
        // jedis.append("k1", "myredis");
        System.out.println(jedis.get("k1"));
        jedis.set("k4", "k4_redis");
        System.out.println("----------------------------------------");
        jedis.mset("str1", "v1", "str2", "v2", "str3", "v3");
        System.out.println(jedis.mget("str1", "str2", "str3"));
        // list
        System.out.println("----------------------------------------");
        // jedis.lpush("mylist", "v1", "v2", "v3", "v4", "v5");
        List<String> list = jedis.lrange("mylist", 0, -1);
        for (String element : list) {
            System.out.println(element);
        }
        // set
        jedis.sadd("orders", "jd001");
        jedis.sadd("orders", "jd002");
        jedis.sadd("orders", "jd003");
        Set<String> set1 = jedis.smembers("orders");
        for (Iterator<String> iterator = set1.iterator(); iterator.hasNext();) {
            String string = iterator.next();
            System.out.println(string);
        }
        jedis.srem("orders", "jd002");
        System.out.println(jedis.smembers("orders").size());
        // hash
        jedis.hset("hash1", "userName", "lisi");
        System.out.println(jedis.hget("hash1", "userName"));
        Map<String, String> map = new HashMap<String, String>();
        map.put("telphone", "13811814763");
        map.put("address", "atguigu");
        map.put("email", "[email protected]");
        jedis.hmset("hash2", map);
        List<String> result = jedis.hmget("hash2", "telphone", "email");
        for (String element : result) {
            System.out.println(element);
        }
        // zset
        jedis.zadd("zset01", 60d, "v1");
        jedis.zadd("zset01", 70d, "v2");
        jedis.zadd("zset01", 80d, "v3");
        jedis.zadd("zset01", 90d, "v4");
        Set<String> s1 = jedis.zrange("zset01", 0, -1);
        for (Iterator<String> iterator = s1.iterator(); iterator.hasNext();) {
            String string = iterator.next();
            System.out.println(string);
        }
    }
}
```
26. Jedis_ Transactions
Basic usage
Test03.java
```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Response;
import redis.clients.jedis.Transaction;

public class Test03 {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // Watch the key; if it is modified elsewhere, abandon the transaction
        /*
         * 3 jedis.watch("serialNum");
         * jedis.set("serialNum", "s#####################");
         * jedis.unwatch();
         */
        Transaction transaction = jedis.multi(); // queued and executed as one batch
        Response<String> response = transaction.get("serialNum");
        transaction.set("serialNum", "s002");
        response = transaction.get("serialNum");
        transaction.lpush("list3", "a");
        transaction.lpush("list3", "b");
        transaction.lpush("list3", "c");
        transaction.exec();
        // 2 transaction.discard();
        System.out.println("serialNum***********" + response.get());
    }
}
```
Locking (optimistic lock with watch)
TestTX.java
```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TestTX {
    public boolean transMethod() throws InterruptedException {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        int balance; // available balance
        int debt;    // outstanding debt
        int amtToSubtract = 10; // amount actually charged
        jedis.watch("balance");
        // jedis.set("balance", "5"); // should not appear here; only for the demo,
        // simulating the entry being modified by another program
        Thread.sleep(7000);
        balance = Integer.parseInt(jedis.get("balance"));
        if (balance < amtToSubtract) {
            jedis.unwatch();
            System.out.println("modify");
            return false;
        } else {
            System.out.println("***********transaction");
            Transaction transaction = jedis.multi();
            transaction.decrBy("balance", amtToSubtract);
            transaction.incrBy("debt", amtToSubtract);
            transaction.exec();
            balance = Integer.parseInt(jedis.get("balance"));
            debt = Integer.parseInt(jedis.get("debt"));
            System.out.println("*******" + balance);
            System.out.println("*******" + debt);
            return true;
        }
    }

    /**
     * In plain English, the watch command marks a key. If a watched key is changed
     * by someone else before the transaction commits, the transaction fails; the
     * program can usually just retry.
     * First mark the key balance, then check whether the balance is sufficient:
     * if not, unwatch and make no deduction; if so, start the transaction and update.
     * If the key balance is modified by someone else in the meantime, an error is
     * reported when committing (exec), and the program can catch it and retry until
     * it succeeds.
     *
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        TestTX test = new TestTX();
        boolean retValue = test.transMethod();
        System.out.println("main retValue-------: " + retValue);
    }
}
```
27. Jedis_ Master/slave replication
- Start 6379 and 6380 first, each as a standalone instance
- The master writes; the slave reads
TestMS.java
```java
import redis.clients.jedis.Jedis;

public class TestMS {
    public static void main(String[] args) {
        Jedis jedis_M = new Jedis("127.0.0.1", 6379);
        Jedis jedis_S = new Jedis("127.0.0.1", 6380);

        jedis_S.slaveof("127.0.0.1", 6379);
        jedis_M.set("class", "1122V2");
        // There may be a replication delay; run again if no value appears yet
        String result = jedis_S.get("class");
        System.out.println(result);
    }
}
```
28. Jedis_JedisPool
JedisPoolUtil
- A Jedis instance must be obtained from a JedisPool
- Jedis instances must be returned to the JedisPool after use
- Even if an error occurs while using Jedis, the instance must still be returned to the JedisPool
JedisPoolUtil.java
```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class JedisPoolUtil {
    private static volatile JedisPool jedisPool = null;

    private JedisPoolUtil() {}

    // Double-checked locking singleton
    public static JedisPool getJedisPoolInstance() {
        if (null == jedisPool) {
            synchronized (JedisPoolUtil.class) {
                if (null == jedisPool) {
                    JedisPoolConfig poolConfig = new JedisPoolConfig();
                    poolConfig.setMaxActive(1000);
                    poolConfig.setMaxIdle(32);
                    poolConfig.setMaxWait(100 * 1000);
                    poolConfig.setTestOnBorrow(true);
                    jedisPool = new JedisPool(poolConfig, "127.0.0.1", 6379);
                }
            }
        }
        return jedisPool;
    }

    public static void release(JedisPool jedisPool, Jedis jedis) {
        if (null != jedis) {
            jedisPool.returnResourceObject(jedis);
        }
    }
}
```
Run the test
TestPool.java
```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class TestPool {
    public static void main(String[] args) {
        JedisPool jedisPool = JedisPoolUtil.getJedisPoolInstance();
        JedisPool jedisPool2 = JedisPoolUtil.getJedisPoolInstance();
        System.out.println(jedisPool == jedisPool2); // true: same singleton pool

        Jedis jedis = null;
        try {
            jedis = jedisPool.getResource();
            jedis.set("aa", "bb");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            JedisPoolUtil.release(jedisPool, jedis);
        }
    }
}
```
Configuration summary
JedisPool's configuration parameters are mostly supplied by the corresponding fields of JedisPoolConfig:
- maxActive: controls how many Jedis instances the pool can allocate via pool.getResource(). If -1, there is no limit; once maxActive instances have been allocated, the pool is exhausted.
- maxIdle: controls the maximum number of idle Jedis instances in the pool.
- whenExhaustedAction: the action taken when all Jedis instances in the pool have been allocated. Three options:
  - WHEN_EXHAUSTED_FAIL --> throw NoSuchElementException when no instance is available;
  - WHEN_EXHAUSTED_BLOCK --> block, or throw JedisConnectionException once maxWait is reached;
  - WHEN_EXHAUSTED_GROW --> create a new Jedis instance, i.e. maxActive becomes meaningless.
- maxWait: the maximum time to wait when borrowing a Jedis instance; if exceeded, JedisConnectionException is thrown directly.
- testOnBorrow: whether to check connection availability (ping()) when borrowing a Jedis instance; if true, every instance handed out is usable.
- testOnReturn: whether to check connection availability (ping()) when returning a Jedis instance to the pool.
- testWhileIdle: if true, an idle-object evictor thread scans idle objects, and any that fail validation are dropped from the pool. Only meaningful when timeBetweenEvictionRunsMillis is greater than 0.
- timeBetweenEvictionRunsMillis: the number of milliseconds the idle-object evictor sleeps between two scans.
- numTestsPerEvictionRun: the maximum number of objects the idle-object evictor examines per scan.
- minEvictableIdleTimeMillis: the minimum time an object must sit idle before the evictor may scan and evict it. Only meaningful when timeBetweenEvictionRunsMillis is greater than 0.
- softMinEvictableIdleTimeMillis: like minEvictableIdleTimeMillis, but additionally requires that at least minIdle objects remain in the pool. If -1, no object is evicted based on idle time. If minEvictableIdleTimeMillis is greater than 0, this setting is ignored; it is also only meaningful when timeBetweenEvictionRunsMillis is greater than 0.
- lifo: whether borrowObject returns objects in last-in-first-out order (the default, DEFAULT_LIFO, behaves like a cache, reusing the most recently used connection first); if false, the idle queue is FIFO.
JedisPoolConfig's defaults for some of these parameters:
- testWhileIdle=true
- minEvictableIdleTimeMillis=60000
- timeBetweenEvictionRunsMillis=30000
- numTestsPerEvictionRun=-1
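Tying the parameters above together, here is a sketch of a fully configured pool. It assumes the Jedis 2.x setter names matching the fields listed here; later Jedis versions renamed some of them (e.g. setMaxTotal, setMaxWaitMillis), so adjust for your version.

```java
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PoolConfigDemo {
    public static void main(String[] args) {
        JedisPoolConfig config = new JedisPoolConfig();
        // Sizing
        config.setMaxActive(100);     // at most 100 live Jedis instances (-1 = unlimited)
        config.setMaxIdle(32);        // keep at most 32 idle connections
        config.setMaxWait(10 * 1000); // wait up to 10s on borrow before an exception
        // Validation
        config.setTestOnBorrow(true);  // ping() before handing a connection out
        config.setTestWhileIdle(true); // let the evictor validate idle connections
        // Idle-object evictor
        config.setTimeBetweenEvictionRunsMillis(30 * 1000); // evictor runs every 30s
        config.setMinEvictableIdleTimeMillis(60 * 1000);    // evict after 60s idle
        config.setNumTestsPerEvictionRun(-1);               // scan all idle objects per run

        JedisPool pool = new JedisPool(config, "127.0.0.1", 6379);
        System.out.println("pool created: " + pool);
        pool.destroy();
    }
}
```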
29. Still to learn
- Redis clustering: the drawbacks of Redis sharding, cluster architecture, and basic cluster-operation commands.
- An introduction to the Lua scripting language.
- Combining Redis and Lua: Redis Lua scripting to build more powerful Redis services.
- Integrating Redis with Spring, etc.
- Using a Redis cluster to share sessions across a Tomcat cluster, etc.
- The Redis VM mechanism.