Redis persistence mechanism

Redis is an in-memory database that supports persistence: data in memory is written to files on disk so that it survives restarts. When Redis restarts, it recovers the data by loading those files back into memory.

Implementation (RDB): Redis fork()s a child process. The child shares the parent's dataset via copy-on-write, writes a snapshot of that data to a temporary file, and when persistence finishes, the temporary file replaces the previous snapshot file; the child process then exits and its memory is released.

RDB is Redis's default persistence mode: at configured intervals, the in-memory data is saved as a snapshot to a binary file on disk, named dump.rdb by default. The save parameter in the configuration file defines the snapshot schedule. (A snapshot is a point-in-time copy of the data it represents.)

AOF: Redis appends every write command it receives to the end of a file via the write() call, similar in spirit to MySQL's binlog. When Redis restarts, it rebuilds the entire dataset in memory by re-executing the write commands saved in that file.

When both methods are enabled, Redis prefers the AOF file for data recovery.
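As a quick illustration, here is a minimal sketch using the redis-py Python client (the connection settings are assumptions, not from this article) that inspects both persistence mechanisms and triggers them by hand:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Inspect the RDB snapshot schedule ("save" directive) and the AOF switch.
print(r.config_get("save"))        # e.g. {'save': '900 1 300 10 60 10000'}
print(r.config_get("appendonly"))  # {'appendonly': 'yes'} if AOF is enabled

# Force an RDB snapshot in the background (fork + temp file + atomic rename).
r.bgsave()

# Ask Redis to rewrite the AOF in the background, compacting the command log.
r.bgrewriteaof()
```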

Cache avalanche, cache penetration, cache preheating, cache updating, cache degradation, and related issues

Cache avalanche

Cache avalanche can be understood simply as the window during which old cache entries have expired but new ones have not yet been written.

For example, if we set the same expiration time for a batch of cache entries, a large swath of the cache expires at the same moment. All the requests that should have hit the cache fall through to the database, putting enormous pressure on the database's CPU and memory, possibly even bringing the database down. This sets off a chain reaction that can collapse the whole system.

Solutions:

Most system designers consider locking (the most common solution) or queuing to ensure that not too many threads read from or write to the database at once, so that a flood of concurrent requests does not land on the underlying storage when the cache fails. Another simple solution is to spread out cache expiration times.
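A minimal sketch of the "spread out expiration times" idea, using the redis-py client (the client choice, key names, and TTL values are illustrative assumptions):

```python
import random
import redis

r = redis.Redis(decode_responses=True)

BASE_TTL = 600  # 10 minutes

def cache_set_with_jitter(key, value):
    # Spread expirations over a window instead of one instant,
    # so a batch of keys written together does not all expire at once.
    ttl = BASE_TTL + random.randint(0, 300)
    r.set(key, value, ex=ttl)
```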

Cache penetration

Cache penetration is when a user queries data that is not present in the database and therefore certainly not present in the cache. The query misses the cache every time, falls through to the database, and returns null (two useless lookups per request). The requests bypass the cache entirely and hit the database directly, a classic cache hit-rate problem.

Solutions:

The most common remedy is a Bloom filter: hash all possibly existing data into a sufficiently large bitmap, so that a query for nonexistent data is intercepted by the bitmap, shielding the underlying storage from the lookup.

A cruder approach: if a query returns an empty result (whether because the data does not exist or because of a system failure), cache the empty result anyway, but with a short expiration time, no more than five minutes. The simplest variant is to store a default value in the cache directly, so the next lookup finds it in the cache and never touches the database.
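A hedged sketch of caching empty results with redis-py (the sentinel value, key naming, and the `db_lookup` helper are made up for illustration):

```python
import redis

r = redis.Redis(decode_responses=True)

NULL_SENTINEL = "__NULL__"   # hypothetical marker for "known to be absent"

def get_user(user_id, db_lookup):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_SENTINEL else cached
    row = db_lookup(user_id)          # falls through to the database
    if row is None:
        # Cache the miss too, but only briefly (<= 5 minutes), so repeated
        # queries for a nonexistent id stop hammering the database.
        r.set(key, NULL_SENTINEL, ex=300)
        return None
    r.set(key, row, ex=3600)
    return row
```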

A 5TB hard disk is full of data; write an algorithm to deduplicate it. What if the values are 32-bit? What if they are 64-bit?

Two structures that push space efficiency to the extreme are the bitmap and the Bloom filter.

Bitmap: essentially a hash table in which each possible element maps to a single bit.

The downside is that a bitmap can record only 1 bit of information per element; to support anything more, you have to spend extra space and time.
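To make the 32-bit case concrete: 2^32 possible values need 2^32 bits = 512 MB, no matter how many terabytes of raw records there are. A hedged sketch against a Redis bitmap in redis-py (the key name is made up); for 64-bit values a plain bitmap (2^64 bits) is infeasible, which is where the Bloom filter below comes in:

```python
import redis

r = redis.Redis()

# Each possible 32-bit value maps to one bit: 2**32 bits = 512 MB total.
def mark_seen(value):
    # value: unsigned 32-bit int; returns the previous bit (1 = duplicate)
    return r.setbit("seen", value, 1)

def already_seen(value):
    return r.getbit("seen", value) == 1
```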

Bloom filter (recommended)

It introduces k (k > 1) mutually independent hash functions so that, within a given space budget and false-positive rate, duplicate detection can be carried out.

Its advantage is that its space efficiency and query time far exceed those of general-purpose algorithms; its disadvantages are a certain false-positive rate and the difficulty of deleting elements.

The core idea of the Bloom filter algorithm is to use multiple different hash functions to resolve "conflicts".

Hashing has a collision problem: two different URLs may hash to the same value. To reduce conflicts, we introduce several hash functions. If any one of them concludes that an element is not in the set, then it is definitely not in the set; only if all the hash functions say the element is present do we consider it (probably) present. This is the basic idea behind the Bloom filter.

Bloom filters are generally used to determine whether an element exists in a very large data set.
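A toy Bloom filter over a Redis bitmap, as a sketch rather than a production filter (the sizes M and K, the key name, and the trick of slicing one SHA-256 digest into K hashes are all illustrative choices):

```python
import hashlib
import redis

r = redis.Redis()

M = 2 ** 27          # bitmap size in bits (16 MB), sized for the expected set
K = 4                # number of hash functions

def _positions(item: str):
    # Derive K "independent" hash positions from one digest (a common trick).
    digest = hashlib.sha256(item.encode()).digest()
    for i in range(K):
        chunk = digest[i * 8:(i + 1) * 8]
        yield int.from_bytes(chunk, "big") % M

def bloom_add(item: str):
    for pos in _positions(item):
        r.setbit("bloom", pos, 1)

def bloom_might_contain(item: str) -> bool:
    # False means definitely absent; True means "probably present".
    return all(r.getbit("bloom", pos) for pos in _positions(item))
```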

Cache penetration vs. cache breakdown

Cache breakdown: a single very hot key sustains heavy concurrent access, all concentrated on that one point. At the instant the key expires, the still-arriving concurrent requests punch through the cache and hit the database directly.

Solution: before accessing the key, use SETNX (SET if Not eXists) on a separate short-lived key to lock the current key's rebuild, and delete that short-lived key when the access is complete.

A case from my own company: two backend machines fetch a token, and one copy of the token is saved to Redis, which ensures that only one thread fetches a fresh token when it expires. Since the production environment has two machines, a distributed lock implementation is used.
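A hedged sketch of the short-lived SETNX-style mutex described above, in redis-py (key names, TTLs, and the retry interval are assumptions; note that SET with nx=True and ex=... acquires the lock and sets its timeout in one atomic command):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def get_hot(key, rebuild):
    value = r.get(key)
    if value is not None:
        return value
    # Only the caller that wins this short-lived lock rebuilds the cache;
    # everyone else briefly waits and retries, so the database is hit once.
    if r.set(f"lock:{key}", "1", nx=True, ex=10):
        try:
            value = rebuild()              # hypothetical database loader
            r.set(key, value, ex=600)
        finally:
            r.delete(f"lock:{key}")
        return value
    time.sleep(0.05)
    return get_hot(key, rebuild)
```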

Cache preheating

Cache preheating should be a fairly familiar concept: when the system goes online, the relevant cache data is loaded into the cache system directly. This avoids the pattern of querying the database first and caching afterwards on the user's first request; users directly hit cache data that was warmed up in advance.

Solution:

  • Write a cache-refresh page and trigger it manually when going live;
  • If the data volume is small, load it automatically at application startup (see the sketch below);
  • Refresh the cache periodically.
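For the second option, a minimal startup loader might look like this (redis-py; `load_hot_rows` is a hypothetical function standing in for your database query):

```python
import redis

r = redis.Redis(decode_responses=True)

def preheat(load_hot_rows):
    # Run once at startup (or from a cron job) before traffic arrives.
    for key, value in load_hot_rows():   # e.g. "SELECT ... FROM hot_items"
        r.set(f"item:{key}", value, ex=3600)
```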

Cache update

In addition to the cache invalidation policies that come with the cache server (Redis offers six to choose from by default), we can also customize invalidation based on specific business requirements. Two policies are common:

  • Periodically clear out expired cache entries.
  • When a user request arrives, check whether the cache entry it uses has expired; if it has, go to the underlying system for fresh data and update the cache.

Each has trade-offs: the drawback of the first is that maintaining a large number of cache keys is troublesome; the drawback of the second is that every user request must check for invalidation, which makes the logic relatively complex. Weigh the two against your own application scenario.
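A sketch of the second strategy, checking freshness at request time (redis-py; `load_from_db` is a hypothetical loader):

```python
import redis

r = redis.Redis(decode_responses=True)

def get_with_refresh(key, load_from_db, ttl=300):
    # An expired key is simply gone from Redis, so a miss here means
    # "stale or absent": reload from the underlying system and re-cache.
    value = r.get(key)
    if value is None:
        value = load_from_db(key)
        r.set(key, value, ex=ttl)
    return value
```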

Cache degradation

When traffic surges, a service has problems (such as slow or unresponsive behavior), or non-core services threaten the performance of the core flow, the service must still be kept available, even in a degraded, lossy form. The system can degrade automatically based on key metrics, or operators can degrade manually via configuration switches.

The ultimate goal of degradation is to keep the core service available, even if lossy. Note that some services cannot be degraded (e.g., add-to-cart, checkout).

A reference scheme, modeled on log levels:

  • General: for example, some services occasionally time out because of network jitter or because a service is going live; these can be degraded automatically.
  • Warning: some services show a fluctuating success rate for a period (for example, between 95% and 100%); they can be degraded automatically or manually, and an alarm is sent.
  • Error: for example, availability drops below 90%, the database connection pool is exhausted, or traffic suddenly jumps to the maximum threshold the system can bear; degrade automatically or manually depending on the situation.
  • Critical error: for example, the data becomes wrong for some special reason; an emergency manual degrade is required.

The purpose of degradation here is to prevent a Redis failure from turning into a database avalanche. So for unimportant cached data, a degradation strategy can be adopted; a common practice is that on a Redis failure we do not query the database, but directly return a default value to the user instead.
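A minimal degradation wrapper along those lines (redis-py; the default payloads and socket timeout are illustrative assumptions):

```python
import redis

r = redis.Redis(decode_responses=True, socket_timeout=0.1)

DEFAULTS = {"recommendations": "[]"}   # hypothetical fallback payloads

def get_or_degrade(key):
    try:
        value = r.get(key)
        return value if value is not None else DEFAULTS.get(key)
    except redis.RedisError:
        # Redis is down or slow: degrade to a static default instead of
        # letting the failure cascade to the database.
        return DEFAULTS.get(key)
```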

What are hot data and cold data?

For hot data, caching is valuable.

For cold data, most entries may be squeezed out of memory before they are ever accessed again; they occupy memory and deliver little value. For frequently modified data, consider caching case by case.

For hot data, take one of our IM products as an example: the birthday-greeting module's list of the day's birthday users may be read from the cache hundreds of thousands of times. Or take a navigation product: we cache the navigation information and then read it millions of times.

Both examples share one trait: the information is modified infrequently but read at a very high rate.

For caching to make sense, data should be read at least twice before it is next updated. That is the most basic rule of thumb; if a cache entry is invalidated before it is ever read back, it delivers little value.

What about data that changes frequently but still calls for caching? Such scenarios exist! For example, when the read path puts heavy pressure on the database yet the data is hot, caching is needed to relieve the database. In some of our assistant products, like counts, favorite counts, and share counts are typical hot data that changes constantly; synchronizing these counts to the Redis cache reduces the pressure on the database.
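A sketch of that counter pattern (redis-py; the key naming is illustrative). INCR is a single atomic command, so concurrent updates are never lost, and a periodic job can flush the values back to the database:

```python
import redis

r = redis.Redis()

def like(post_id):
    # One atomic command per like; no read-modify-write race.
    return r.incr(f"post:{post_id}:likes")
```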

What are the differences between Memcache and Redis?

1) Memcached stores all its data in memory: the data is lost after a power failure and cannot exceed the size of memory. Redis can persist its data to disk, so data survives restarts.

2) All Memcached values are plain strings. Redis, as its replacement, supports richer data types, such as list, set, zset, and hash.

3) They use different underlying models: the underlying implementations and the protocols used to communicate with clients differ. Redis built its own VM mechanism directly, because ordinary system calls waste a certain amount of time moving and requesting memory.

4) Value size: a Redis value can be up to 512 MB, while Memcached's limit is 1 MB.

5) Redis is much faster than Memcached.

6) Redis supports data backup, i.e., backup via master-slave replication.

Why is single-threaded Redis so fast

(1) Purely in-memory operations

(2) Single-threaded operation, which avoids frequent context switches

(3) A non-blocking I/O multiplexing mechanism

Redis data types and usage scenarios for each data type

Answer: five in total.

(1) String

The most common set/get operations; value can be either a string or a number. Typically used for caching, including various counting functions.

(2) Hash

Here value is a structured object, and it is convenient to manipulate a single field within it. I used this data structure to store user information when building single sign-on: with the CookieId as the key and a 30-minute cache expiration, it nicely simulates session-like behavior.

(3) List

With the List data structure you can implement simple message-queue functionality. You can also use the LRANGE command to build Redis-based pagination, with excellent performance and a good user experience. Another scene where I use it is fetching market data: a classic producer/consumer scenario, where a List neatly provides first-in, first-out queuing.

(4) Set

Because a Set is a collection of non-repeating values, it can provide global deduplication. Why not use the Set that comes with the JVM? Because our systems are generally deployed as clusters, global deduplication with the JVM's own Set would require a separate shared service, which is too much trouble.

In addition, Set operations such as intersection, union, and difference can compute common preferences, combined preferences, one's own unique preferences, and so on.

(5) Sorted set

A Sorted set adds a weight parameter, score, and the elements in the set are ordered by score. It suits leaderboard applications and TOP N queries.
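A whirlwind tour of all five types in redis-py (all key names and values are made-up examples):

```python
import redis

r = redis.Redis(decode_responses=True)

# String: counters / simple cache
r.set("pv", 0)
r.incr("pv")

# Hash: a structured object, updatable field by field (e.g. a session)
r.hset("session:cookie123", mapping={"uid": "42", "name": "alice"})
r.expire("session:cookie123", 1800)          # ~30 min, session-like

# List: FIFO queue (producer LPUSH, consumer RPOP) and paging via LRANGE
r.lpush("queue", "job1", "job2")
r.rpop("queue")                              # -> "job1" (first in, first out)
r.lrange("timeline", 0, 9)                   # page 1: items 0..9

# Set: global dedup plus intersections ("common preferences")
r.sadd("likes:alice", "redis", "go")
r.sadd("likes:bob", "redis", "rust")
r.sinter("likes:alice", "likes:bob")         # -> {"redis"}

# Sorted set: leaderboard, top-N by score
r.zadd("rank", {"alice": 98, "bob": 85})
r.zrevrange("rank", 0, 2, withscores=True)   # TOP 3
```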

Redis internal structure

  • dict: a data structure for maintaining key-to-value mappings, similar to the maps or dictionaries of many languages; it is essentially the hash table of algorithm textbooks.
  • SDS: the simple dynamic string, Redis's counterpart to C's char*. It can store arbitrary binary data, so it cannot rely on the character '\0' to mark the end of a string the way C does, and it must carry an explicit length field.
  • skiplist: a multi-pointer linked list that is simple to implement, with lookup efficiency comparable to an optimized balanced binary tree but a far simpler implementation than a balanced tree.
  • quicklist: the encoding behind lists, a linked list whose nodes are compressed blocks (ziplists).
  • ziplist: a compressed list, a sequential data structure composed of a series of specially encoded, contiguous memory blocks.

Redis expiration strategy and memory flushing mechanism

Redis uses a periodic delete + lazy delete strategy.

Why not use a timed deletion policy?

A timer would monitor every key and delete it automatically the moment it expires. Although memory is freed promptly, this consumes CPU: under heavy concurrent load the CPU should spend its time serving requests, not deleting keys, so this strategy is not used.

How does periodic deletion + lazy deletion work?

Periodic deletion: by default, Redis checks for expired keys every 100ms and deletes the expired ones it finds. Note that Redis does not scan every key each time; it samples keys at random for checking (scanning all keys every 100ms would grind Redis to a halt). Consequently, relying on the periodic deletion policy alone leaves many expired keys undeleted.

This is where lazy deletion comes in handy: when you fetch a key, Redis checks whether that key has expired, and deletes it if it has.

Are periodic deletion + lazy deletion free of problems?

No. If a key escapes periodic deletion and you never request it again, lazy deletion never triggers either, so Redis's memory usage climbs higher and higher. That is where the memory eviction mechanism comes in.

There is a line of configuration in redis.conf:

```
maxmemory-policy volatile-lru
```

This line configures the memory eviction policy. (Never set it? Time to review your configuration.)

  • volatile-lru: evict the least recently used key from the set of keys with an expiration (server.db[i].expires)
  • volatile-ttl: evict the key closest to expiring from the set of keys with an expiration (server.db[i].expires)
  • volatile-random: evict a random key from the set of keys with an expiration (server.db[i].expires)
  • allkeys-lru: evict the least recently used key from the whole keyspace (server.db[i].dict)
  • allkeys-random: evict a random key from the whole keyspace (server.db[i].dict)
  • noeviction: never evict; new write operations return an error

Ps: if no key has an expiration set, the prerequisite is not met, and the volatile-lru, volatile-random, and volatile-ttl policies behave essentially the same as noeviction.
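A small demonstration of the eviction configuration and of lazy expiration, using redis-py (the policy and memory limit shown are arbitrary example values):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

# Pick an eviction policy at runtime (same effect as editing redis.conf).
r.config_set("maxmemory-policy", "allkeys-lru")
r.config_set("maxmemory", "100mb")

# Lazy deletion in action: the expired key is only reclaimed when touched.
r.set("temp", "x", ex=1)
time.sleep(1.5)
print(r.get("temp"))   # None: the expired key is gone on access
print(r.ttl("temp"))   # -2 means the key no longer exists
```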

Why is Redis single threaded

According to the official FAQ, because Redis is memory-based, CPU is not its bottleneck; the bottleneck is most likely machine memory size or network bandwidth.

Since single-threading is easy to implement and the CPU is not a bottleneck, adopting a single-threaded design makes sense (multi-threading brings plenty of trouble, after all!). Redis uses queueing to turn concurrent access into serial access.

1) The vast majority of requests are pure memory operations (very fast)

2) Single thread is adopted to avoid unnecessary context switching and race conditions

3) A non-blocking I/O multiplexing mechanism. More broadly, Redis's strengths are:

  • Fast: the data lives in memory, in structures similar to a HashMap, whose lookups and updates are both O(1).
  • Rich data types: string, list, set, sorted set, and hash are all supported.
  • Transactions are supported, and operations are atomic: changes to the data either all happen or none do.
  • Rich features: usable for caching and messaging; a key can be given an expiration time, after which it is deleted automatically.

Multiple subsystems set the same key simultaneously. What should we pay attention to?

Redis transactions are basically not recommended here, because our production environment is mostly a sharded Redis cluster, and the keys touched by one transaction may not all live on the same redis-server. That makes the Redis transaction mechanism very weak in this setting.

  • If ordering among the writes does not matter: prepare a distributed lock; everyone contends for it, and whoever grabs the lock performs the set operation.
  • If ordering on this key is required: distributed lock + timestamp. Suppose system B grabs the lock first and sets key1 to {valueB 3:05}. System A then grabs the lock, finds that its valueA carries a timestamp older than the one in the cache, and skips its set operation. And so on (see the sketch after this list).
  • Use queues to turn the set operations into serial access; this also lets Redis cope with high concurrency when the consistency of reads and writes on a key must be guaranteed.
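As referenced above, a hedged sketch of the "lock + timestamp" idea using WATCH-based check-and-set in redis-py (the "value|timestamp" encoding and the function name are illustrative assumptions, not a standard pattern):

```python
import redis

r = redis.Redis(decode_responses=True)

def set_if_newer(key, value, ts):
    # Check-and-set: only overwrite if our timestamp is newer than the one
    # already cached, retrying if another writer slips in between.
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)
                current = pipe.get(key)
                if current is not None and float(current.rsplit("|", 1)[1]) >= ts:
                    pipe.unwatch()
                    return False          # cached value is already newer
                pipe.multi()
                pipe.set(key, f"{value}|{ts}")
                pipe.execute()
                return True
            except redis.WatchError:
                continue                  # key changed under us; retry
```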

Individual Redis operations are atomic and thread-safe; at the single-command level, Redis already handles concurrency for you.

How should a Redis cluster be set up? What are the options?

1. Twemproxy. The general idea is a proxy: wherever a client would have connected to Redis, it connects to Twemproxy instead. Twemproxy receives requests as a proxy, forwards each one to a specific Redis instance chosen by consistent hashing, and returns the result to the client.

Drawbacks: the single-port Twemproxy instance itself becomes a pressure point; and when the number of Redis nodes changes, the consistent-hash values change, and data is not automatically migrated to the new nodes.

2. Codis, currently the most widely used cluster scheme. It offers essentially the same experience as Twemproxy, but additionally supports migrating old nodes' data to the new hash slots when the number of nodes changes.

3. Redis Cluster, built into Redis since 3.0. Its distinguishing traits are the concept of hash slots (rather than a consistent-hash ring) and its support for replica nodes. See the official documentation for details.
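For completeness, a minimal Redis Cluster client sketch (this assumes redis-py >= 4.x, which ships a cluster client; the node address is a placeholder):

```python
from redis.cluster import RedisCluster

# Connect via any node; the client discovers the rest of the cluster and
# routes each key to the node owning its hash slot (CRC16(key) mod 16384).
rc = RedisCluster(host="10.0.0.1", port=6379, decode_responses=True)
rc.set("greeting", "hello")
print(rc.get("greeting"))
```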

Have you attempted a multi-machine Redis deployment? How to ensure data consistency?

Master/slave replication, read/write separation

One node is the master database (master) and the others are slave databases (slaves). The master handles both reads and writes and automatically synchronizes written data to the slaves. A slave is generally read-only and receives the data synchronized from the master. A slave can have only one master.

How does Redis handle a large number of requests?

Redis is a single-threaded program, meaning that it can only handle one client request at a time;

Redis serves multiple client requests through I/O multiplexing (select, epoll, kqueue, etc., with different implementations depending on the platform).

Common Redis performance problems and solutions?

(1) It is best for the Master not to do any persistence work, such as RDB memory snapshots or AOF log files

(2) If the data is important, have a Slave enable AOF backup, with the policy set to sync once per second

(3) For the speed and connection stability of Master/Slave replication, it is better for Master and Slave to reside in the same LAN

(4) Try to avoid adding slaves to a master that is already under heavy pressure

Master <- Slave1 <- Slave2 <- Slave3…

Explain the Redis threading model

The file event handler consists of sockets, an I/O multiplexer, a file event dispatcher, and event handlers. The I/O multiplexer listens on multiple sockets at the same time and associates each socket with different event handlers based on the task the socket is currently performing.

When a listened socket becomes ready for accept, read, write, close, and so on, the file event corresponding to the operation is generated, and the file event dispatcher invokes the event handler associated with that socket to handle the event.


Working principle:

The I/O multiplexer listens for multiple sockets and passes those sockets that generated the event to the file event dispatcher.

Although multiple file events may occur concurrently, the I/O multiplexer always queues all the event-generating sockets and delivers them through that queue to the file event dispatcher sequentially and synchronously, one socket at a time:

Only after the event generated by the previous socket has been fully processed (the socket has run the event handler associated with the event) does the I/O multiplexer deliver the next socket to the file event dispatcher. If a socket is both readable and writable, the server reads from it first and writes to it afterwards.

Why are Redis operations atomic and how are they guaranteed to be atomic?

For Redis, the atomicity of a command means the operation is indivisible: it either executes completely or not at all.

Redis operations are atomic because Redis is single-threaded. (Newer versions of Redis have introduced multi-threaded I/O; this discussion is based on the classic single-threaded versions.)

All APIs provided by Redis are atomic single operations; transactions in Redis exist to make batches of operations atomic.

Are multiple commands atomic under concurrency?

Not necessarily. Either collapse the get-and-set pair into a single command such as INCR, use Redis transactions, or use Redis + Lua scripting.
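A side-by-side sketch in redis-py (key names are examples): the GET-then-SET pair is racy, while INCR and a Lua script are each executed atomically:

```python
import redis

r = redis.Redis(decode_responses=True)

# Racy: GET-then-SET spans two commands, so two clients can interleave.
n = int(r.get("count") or 0)
r.set("count", n + 1)

# Atomic: one command, executed by Redis's single command-processing thread.
r.incr("count")

# Atomic multi-step logic: a Lua script runs as one indivisible unit.
script = "return redis.call('INCRBY', KEYS[1], ARGV[1])"
r.eval(script, 1, "count", 10)
```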

Redis transactions

Redis transaction functionality is implemented through four primitives: MULTI, EXEC, DISCARD and WATCH

Redis serializes all the commands in a transaction and executes them sequentially.

  1. Redis does not support rollback: "Redis does not roll back when a transaction fails, but continues to execute the remaining commands," which keeps Redis internals simple and fast.
  2. If a command is rejected while being queued (for example, a syntax error), the transaction is refused and none of the commands execute.
  3. If a command fails at runtime, the remaining correct commands still execute.

Note: DISCARD merely abandons the current transaction; the effects of commands that already executed correctly remain.

1) The MULTI command starts a transaction and always returns OK. After MULTI, the client can send any number of commands to the server; they are not executed immediately but placed in a queue. When the EXEC command is invoked, all commands in the queue are executed.

2) EXEC: executes all commands in the transaction block and returns their return values in execution order. It returns nil if the transaction was aborted (for example, when a WATCHed key was modified).

3) By calling DISCARD, the client empties the transaction queue, abandons the transaction, and exits the transaction state.

4) The WATCH command provides check-and-set (CAS) behavior for Redis transactions. It can monitor one or more keys; if any of them is modified (or deleted) before EXEC, the subsequent transaction does not execute. Monitoring lasts until the EXEC command.
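How these primitives surface in redis-py (a sketch; key names are examples): a pipeline with transaction=True wraps the queued commands in MULTI/EXEC, and watch() gives the CAS behavior:

```python
import redis

r = redis.Redis(decode_responses=True)

# MULTI ... EXEC: pipeline(transaction=True) is the redis-py default.
with r.pipeline(transaction=True) as pipe:
    pipe.set("a", 1)
    pipe.incr("a")
    results = pipe.execute()    # sends MULTI ... EXEC; returns all replies
print(results)                  # e.g. [True, 2]

# WATCH-based CAS: abort if someone else touches the key before EXEC.
with r.pipeline() as pipe:
    try:
        pipe.watch("balance")
        balance = int(pipe.get("balance") or 0)
        pipe.multi()
        pipe.set("balance", balance - 10)
        pipe.execute()
    except redis.WatchError:
        pass                    # key changed; the transaction was discarded
```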

Redis implements distributed locking

Redis runs in single-process, single-threaded mode, using a queue model to turn concurrent access into serial access, so multiple clients' connections to Redis do not race against each other. Within Redis, the SETNX command can be used to implement a distributed lock.

SETNX sets key to value if and only if key does not exist; if the given key already exists, SETNX does nothing.

Unlock: release the lock with the DEL key command.

Resolve deadlocks:

  • Expire () is used to set the maximum holding time for the lock. If it is exceeded, Redis will release the lock for us.
  • Use setnx key “current system time + lock holding time” and getSet key “current system time + lock holding time” combination command can be achieved.
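A hedged lock sketch in redis-py, as referenced in the list above. Modern Redis can combine SETNX and EXPIRE into one atomic SET NX EX call, and a Lua script makes the compare-token-then-delete unlock indivisible; the names and TTLs here are illustrative:

```python
import uuid
import redis

r = redis.Redis(decode_responses=True)

def acquire(lock_name, ttl=10):
    # NX + EX in one SET makes "acquire + set timeout" atomic, closing the
    # deadlock window that a separate SETNX-then-EXPIRE pair would leave.
    token = str(uuid.uuid4())
    if r.set(lock_name, token, nx=True, ex=ttl):
        return token
    return None

# Unlock only if we still own the lock: compare the token, then delete,
# inside Lua so the check and the DEL cannot interleave with other clients.
UNLOCK = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
end
return 0
"""

def release(lock_name, token):
    return r.eval(UNLOCK, 1, lock_name, token)
```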

Closing words

Thank you for reading this far. Where the article falls short, feel free to point it out; if you think it is good, give it a thumbs-up.

You are also welcome to follow my public account, Programmer Maidong, where I share Java-related technical articles and industry news every day. Follows and shares are appreciated!