Simply put, Redis is a database. Unlike traditional databases that store data on disk, Redis keeps its data in memory, so it is often called an in-memory database. It is also a NoSQL (non-relational) database. Because it keeps data in memory, Redis is widely used for caching and other scenarios that require high-speed reads and writes.

Why use Redis

Redis was introduced into software architecture because it was “fast and strong”.

1. Fast means high performance

Computer hardware access speed, from slowest to fastest: hard disk, network, memory, CPU.

In a traditional database, the first access to a piece of data is usually slow because the database has to read it from the hard disk. Redis keeps data in memory, so reading it is much faster than reading from disk.

So we often use Redis as a cache: the first time a piece of data is read from the database, we put it into Redis, and subsequent requests read it from Redis directly.
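Below is a minimal cache-aside sketch using the Jedis client. The key naming, the 10-minute TTL, and the loadUserNameFromDb helper are illustrative assumptions for this example, not something prescribed by Redis or by this article:

```java
import redis.clients.jedis.Jedis;

public class UserCache {

    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis instance

    // Cache-aside: try Redis first, fall back to the database, then cache the result.
    public String getUserName(long userId) {
        String key = "user:name:" + userId;          // illustrative key naming
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                           // cache hit: no database access
        }
        String fromDb = loadUserNameFromDb(userId);  // cache miss: hit the database once
        if (fromDb != null) {
            jedis.setex(key, 600, fromDb);           // cache for 10 minutes (TTL is arbitrary)
        }
        return fromDb;
    }

    // Hypothetical database lookup; replace with your real DAO/ORM call.
    private String loadUserNameFromDb(long userId) {
        return "user-" + userId;
    }
}
```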

2. Strong: stability in high concurrency scenarios (high availability)

Under high concurrency, Redis can withstand far more requests than a traditional database, so we can consider putting data that faces high concurrency into Redis. For example, a flash-sale (seckill) feature may receive hundreds of thousands of requests within a few seconds; if those requests hit the database directly, the database can be overwhelmed instantly.
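For illustration, one common way to protect stock in a flash sale is to keep the remaining stock as a counter in Redis and decrement it atomically, so that only requests which actually grab stock ever touch the database. The key names and the Jedis client are assumptions for this sketch, not the article's own design:

```java
import redis.clients.jedis.Jedis;

public class SeckillStock {

    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis instance

    // Pre-load stock into Redis once, e.g. when the flash sale is configured.
    public void initStock(String productId, long stock) {
        jedis.set("seckill:stock:" + productId, String.valueOf(stock));
    }

    // Each request decrements the counter in Redis instead of hitting the database.
    // DECR is atomic, so concurrent requests cannot oversell.
    public boolean tryBuy(String productId) {
        long remaining = jedis.decr("seckill:stock:" + productId);
        if (remaining < 0) {
            jedis.incr("seckill:stock:" + productId); // roll back the over-decrement
            return false;                             // sold out
        }
        return true; // success: the order can be written to the database asynchronously
    }
}
```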

Which scenarios are not a good fit for Redis

Of course, not all scenarios and not all data are a good fit for Redis. In general, consider the following points:

  • Is the query hit rate high? If the cache hit ratio would be low, there is no point in putting the data in Redis (one way to check the hit ratio is sketched after this list);
  • What is the read/write pattern? If the data is written (added, updated, deleted) frequently and writes outnumber reads, there is little benefit in caching it in Redis;
  • How large is the data? Large values such as files do not belong in Redis.
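One practical way to judge the hit rate is to read the keyspace_hits and keyspace_misses counters from Redis's INFO stats output. The sketch below does this with the Jedis client; the connection details are assumed:

```java
import redis.clients.jedis.Jedis;

public class CacheHitRatio {

    // Reads keyspace_hits / keyspace_misses from INFO stats and prints the hit ratio.
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed local Redis instance
            String stats = jedis.info("stats");
            long hits = parse(stats, "keyspace_hits");
            long misses = parse(stats, "keyspace_misses");
            double ratio = (hits + misses) == 0 ? 0.0 : (double) hits / (hits + misses);
            System.out.printf("hits=%d misses=%d hitRatio=%.2f%%%n", hits, misses, ratio * 100);
        }
    }

    // Extracts a numeric field such as "keyspace_hits:12345" from the INFO output.
    private static long parse(String info, String field) {
        for (String line : info.split("\r\n")) {
            if (line.startsWith(field + ":")) {
                return Long.parseLong(line.substring(field.length() + 1).trim());
            }
        }
        return 0;
    }
}
```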

Local cache or Redis

Caches are divided into local caches and distributed caches:

1. Local cache

For example, Guava, Ehcache, or even a cache kept in a plain Map are all local caches. A local cache is lightweight and simple to implement, and its life cycle ends when the JVM shuts down. But if the program runs as multiple instances (deployed on several nodes), the caches in different instances are not kept consistent with each other.
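As an illustration, a local cache built with Guava looks roughly like this (the size limit and expiry time are arbitrary example values):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.TimeUnit;

public class LocalCacheDemo {

    // An in-process cache: entries live inside this JVM only and disappear when it exits.
    private static final Cache<String, String> CACHE = CacheBuilder.newBuilder()
            .maximumSize(1_000)                      // evict entries beyond 1,000
            .expireAfterWrite(10, TimeUnit.MINUTES)  // entries expire 10 minutes after being written
            .build();

    public static void main(String[] args) {
        CACHE.put("user:1", "Alice");
        System.out.println(CACHE.getIfPresent("user:1")); // Alice
        System.out.println(CACHE.getIfPresent("user:2")); // null: a miss, the caller must load it
    }
}
```

Everything here lives on the JVM's own heap, which is exactly why two separately deployed instances cannot see each other's entries.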

2. Distributed cache

Redis is a distributed cache: if the program runs as multiple instances, every instance shares the cached data in Redis. However, introducing Redis means you must also ensure its high availability, which makes the architecture more complex.

Redis or Memcached

Memcached is also often used as a cache and is likewise a distributed cache, so how does it differ from Redis?

  • Redis supports richer data types, while Memcached only supports simple string values (see the sketch after this list).
  • Redis supports persistence: it can save in-memory data to disk and reload it after a restart, while Memcached keeps data only in memory.
  • Redis has a native cluster mode; Memcached has no native clustering, so you have to implement it yourself (for example, with client-side sharding).
  • Redis uses a single-threaded I/O multiplexing model (multi-threaded I/O was added in Redis 6.0), while Memcached uses a multi-threaded, non-blocking I/O model.
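To make the first point concrete, here is a small sketch of the data types Redis offers beyond plain strings, using the Jedis client (the keys and values are made up for the example):

```java
import redis.clients.jedis.Jedis;

public class RedisDataTypes {

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed local Redis instance
            // String: the only value type Memcached offers
            jedis.set("page:home", "<html>...</html>");

            // Hash: store an object's fields without serializing the whole object
            jedis.hset("user:1", "name", "Alice");
            jedis.hset("user:1", "age", "30");

            // List: e.g. a simple task queue or timeline
            jedis.lpush("queue:tasks", "task-1", "task-2");

            // Set: unique members, e.g. users who liked a post
            jedis.sadd("post:1:likes", "user:1", "user:2");

            // Sorted set: ranking by score, e.g. a leaderboard
            jedis.zadd("leaderboard", 99.5, "user:1");
        }
    }
}
```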

Finally, should you introduce Redis at all, and should you use a local cache or a distributed cache? That depends on the actual situation of your project. For our project, Redis, with its rich data types and support for persistence, is the better fit.
