Preface
- Interview Question 1: Why use Redis? Where does it fit in your business?
- Question 1: What data types are in Redis?
- Question 2: What is the difference between Redis and Memcached?
- Follow-up 3: How does Redis prevent data loss? How does it persist data?
- Interview Question 2: Why is Redis single threaded?
- Follow-up question 1: single thread only uses a single core CPU, too wasteful, what is the way to play multi-core CPU performance?
- Interview Question 3: Talk about your understanding of cache penetration, cache breakdown, and cache avalanche
- Question 1: What is the solution to cache breakdown?
Interview Question 1: Why use Redis? Where does it fit in your business?
Serious answer:
Redis is one of the best-known ways to handle high concurrency and improve availability, and it is very effective at improving server performance.
This brings us to high-concurrency scenarios. As we know, the weak point in a concurrent scenario is the database. The purpose of introducing a cache (and likewise load balancing, clustering, and so on) is to reduce pressure on the database, so that more of the requests headed for the DB are intercepted and handled along the way, much like a leave request being signed off by your direct manager before it ever reaches the big boss.
In plain terms, high concurrency hits the server like a punch. Take it bare-chested and you'll be coughing blood; pad yourself with enough layers and the same punch barely tickles.
And what is that thick, shock-absorbing padding? The cache. Operating on a cache means operating directly on memory, which is fast, so a cache can handle far more requests than direct access to the database ever could.
Redis advantage:
- Excellent read and write performance: Redis can serve roughly 110,000 reads/s and 81,000 writes/s.
- Supports data persistence via AOF and RDB.
- Supports transactions: every individual Redis operation is atomic, and Redis can also execute several operations together as an atomic unit.
- Besides string values, Redis supports hash, set, zset, and list data structures.
- Supports master/slave replication: the master automatically synchronizes data to its slaves, and reads can be separated from writes.
- Supports master/slave replication. The master automatically synchronizes data to the slave, and reads and writes can be separated.
- Supports a large number of cluster nodes.
Suppose a user accesses some data in the database for the first time. This is slow because it has to be read from disk. If the accessed data is stored in Redis, the next access can be served straight from the cache. We can also move part of the database's data into the cache ahead of time, so that some user requests go straight to the cache instead of the database. When the corresponding data changes in the database, the cached copy can be synchronized accordingly.
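The read-through flow just described is the classic cache-aside pattern. Here is a minimal sketch, with a plain `HashMap` standing in for Redis and a made-up `loadFromDb` method standing in for the real database (both are illustrative stand-ins, not Redis APIs):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal cache-aside sketch: the Map stands in for Redis,
// loadFromDb() for a real database query. Illustrative only.
public class CacheAside {
    private final Map<String, String> cache = new HashMap<>();
    int dbHits = 0; // counts how often the "database" is touched

    String loadFromDb(String key) {
        dbHits++;
        return "value-of-" + key; // pretend this came from disk
    }

    public String get(String key) {
        String v = cache.get(key);
        if (v == null) {          // cache miss: fall through to the DB...
            v = loadFromDb(key);
            cache.put(key, v);    // ...and populate the cache for next time
        }
        return v;                 // cache hit: the DB is never touched
    }

    public static void main(String[] args) {
        CacheAside c = new CacheAside();
        c.get("hot-word");            // first access hits the DB
        c.get("hot-word");            // second access is served from memory
        System.out.println(c.dbHits); // prints 1
    }
}
```

Every request after the first is absorbed by memory; that is the "padding" doing its job.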
In our business, including hot word query, some real-time ranking data, traffic and likes statistics, Session sharing and so on, can be introduced into Redis to deal with.
Further inquiry:
Question 1: What data types are in Redis?
Rich data types. Redis has 8 data types, of which the most commonly used are String, Hash, List, Set, and Sorted Set (ZSet), all organized by key. Each data type provides a rich variety of commands that cover most needs, and if you have special needs you can build your own atomic operations with Lua scripts.
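For readers who know the JDK collections, the five common types map onto familiar structures. The following is only an analogy for their semantics, not how Redis implements them internally:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Rough JDK analogies for the five core Redis types.
public class RedisTypeAnalogy {
    public static void main(String[] args) {
        // String: one key, one value (SET/GET)
        String title = "hello";

        // Hash: field -> value map stored under a single key (HSET/HGET)
        Map<String, String> user = new HashMap<>();
        user.put("name", "alice");

        // List: ordered, allows duplicates, push/pop at both ends (LPUSH/RPOP)
        Deque<String> queue = new ArrayDeque<>();
        queue.addFirst("job1");

        // Set: unordered, unique members (SADD/SISMEMBER)
        Set<String> tags = new HashSet<>();
        tags.add("redis");
        tags.add("redis"); // duplicate is silently ignored

        // Sorted Set: members ordered by a score (ZADD/ZRANGE), the
        // structure behind the real-time rankings mentioned earlier
        TreeMap<Double, String> ranking = new TreeMap<>();
        ranking.put(95.0, "playerA");
        ranking.put(80.0, "playerB");

        System.out.println(tags.size());                    // 1
        System.out.println(ranking.lastEntry().getValue()); // playerA
    }
}
```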
Question 2: What is the difference between Redis and Memcached?
Both are non-relational in-memory key-value stores. Yet companies these days use Redis for caching. Why not Memcached?
- All of Memcached's values are plain strings; Redis, as an alternative, supports much richer data types.
- Redis is generally faster than Memcached.
- Redis can persist data to disk, which is critical: a power outage no longer means losing everything.
Follow-up 3: How does Redis prevent data loss? How does it persist data?
RDB persistence (snapshot)
- A snapshot of all data at a point in time is generated and saved to disk. With a large dataset, this can be slow.
- You can copy the snapshot to another server to create a copy of the server with the same data.
- If the system is faulty, data generated after the last snapshot is created will be lost.
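The snapshot behavior, including the window where data written after the last snapshot is lost, can be sketched with plain Java serialization. The class and file names here are made up for illustration:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

// Toy RDB-style snapshot: dump the whole in-memory map to a file, then
// restore from it. Data written after the snapshot is lost on restore,
// which is exactly the failure window described above.
public class RdbSketch {
    static void snapshot(HashMap<String, String> data, File f) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(new HashMap<>(data)); // point-in-time copy
        }
    }

    @SuppressWarnings("unchecked")
    static HashMap<String, String> restore(File f) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (HashMap<String, String>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("dump", ".rdb");
        HashMap<String, String> data = new HashMap<>();
        data.put("k1", "v1");
        snapshot(data, f);     // snapshot taken here
        data.put("k2", "v2");  // written AFTER the snapshot...

        HashMap<String, String> recovered = restore(f);
        System.out.println(recovered.containsKey("k1")); // true
        System.out.println(recovered.containsKey("k2")); // false: lost
    }
}
```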
AOF persistence (instant updates)
- Add the write command to the end of the AOF File (Append Only File).
- Using AOF persistence requires choosing a synchronization option to control when write commands are flushed to the disk file. This is because writing to a file does not immediately put the contents on disk; they sit in a buffer until the operating system decides to flush them.
There are three synchronization options (frequencies):
- `always`: sync after every write command;
- `everysec`: sync once per second;
- `no`: let the operating system decide when to sync.
The `everysec` option is usually the right choice: at most about one second of data is lost in a crash, and syncing once per second has virtually no impact on server performance.
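A toy version of the append-only idea, with a made-up `AofSketch` class: every write command is appended to a log that can be replayed on restart to rebuild state. The `flush()` call here plays the role of the `always` option (a true fsync would additionally need something like `FileChannel.force`):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashMap;

// Toy AOF: append each write command to a log file; on restart, replay
// the log to rebuild the in-memory state. Illustrative only.
public class AofSketch {
    private final BufferedWriter log;

    AofSketch(Path file) throws IOException {
        log = Files.newBufferedWriter(file,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    void set(String key, String value) throws IOException {
        log.write("SET " + key + " " + value);
        log.newLine();
        log.flush(); // "always": push the buffer out after every command
    }

    static HashMap<String, String> replay(Path file) throws IOException {
        HashMap<String, String> state = new HashMap<>();
        for (String line : Files.readAllLines(file)) {
            String[] parts = line.split(" ");
            if (parts[0].equals("SET")) state.put(parts[1], parts[2]);
        }
        return state;
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("appendonly", ".aof");
        AofSketch aof = new AofSketch(f);
        aof.set("k1", "v1");
        aof.set("k1", "v2"); // the later command wins on replay
        System.out.println(replay(f).get("k1")); // v2
    }
}
```

Unlike the snapshot sketch, no acknowledged write older than the last flush is ever lost, which is why AOF is the "instant update" option.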
Interview Question 2: Why is Redis single threaded?
Redis is single threaded. How can I exploit multiple CPU / cores?
It’s not very frequent that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound. For instance, using pipelining Redis running on an average Linux system can deliver even 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU.
However, to maximize CPU usage you can start multiple instances of Redis in the same box and treat them as different servers. At some point a single box may not be enough anyway, so if you want to use multiple CPUs you can start thinking of some way to shard earlier.
You can find more information about using multiple Redis instances in the Partitioning page.
However with Redis 4.0 we started to make Redis more threaded. For now this is limited to deleting objects in the background, and to blocking commands implemented via Redis modules. For future releases, the plan is to make Redis more and more threaded.
Serious answer:
The bottleneck in Redis is not CPU speed but network bandwidth and machine memory. Moreover, a single thread avoids the cost of context switching and is simple to implement. Since single-threading is easy to implement and the CPU is not the bottleneck, a single-threaded design makes sense, and it also sidesteps the pitfalls of multi-threading. Note that this means each Redis node handles commands on a single thread.
Further inquiry:
Follow-up question 1: single thread only uses a single core CPU, too wasteful, what is the way to play multi-core CPU performance?
We can do this by running multiple Redis instances on a single server. When we emphasize "single-threaded," we mean that only one thread handles network requests and command execution. A running Redis server actually has more than one thread for background tasks, and in production Redis is typically deployed as a cluster of many nodes anyway, so in practice this is rarely a concern.
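Spreading load across several instances can be as simple as client-side sharding: hash the key to pick an instance. A minimal sketch; the port numbers are hypothetical, and a real client would hold a connection per instance:

```java
// Minimal client-side sharding sketch: pick one of N independent Redis
// instances by hashing the key. Each single-threaded instance can then
// be pinned to its own CPU core.
public class ShardPicker {
    static final int[] PORTS = {6379, 6380, 6381}; // hypothetical instances

    static int pickPort(String key) {
        // floorMod keeps the slot non-negative even if hashCode() is negative
        int slot = Math.floorMod(key.hashCode(), PORTS.length);
        return PORTS[slot];
    }

    public static void main(String[] args) {
        // The same key always routes to the same instance
        System.out.println(pickPort("user:1001") == pickPort("user:1001")); // true
    }
}
```

Note that a naive modulo scheme reshuffles most keys when an instance is added or removed; consistent hashing or Redis Cluster's hash slots address that, as the Partitioning page mentioned above describes.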
Interview Question 3: Talk about your understanding of cache penetration, cache breakdown, and cache avalanche
Serious answer:
- Cache penetration: requests for data that exists in neither the cache nor the database. Every such request falls through to the database, which finds nothing (e.g., null); in a short time the database's threads fill up, other services block, and finally the online service becomes unavailable. This is often the result of malicious or invalid requests.
- Cache breakdown: requests for data that is missing from the cache but does exist in the database (typically a hot key whose cache entry has just expired). With many concurrent users, the requests all miss the cache at the same moment and query the database simultaneously, causing an instantaneous spike in database load that can stall the online system.
- Cache avalanche: a large number of cache entries expire at the same time; effectively cache breakdown at scale.
Further inquiry:
Question 1: What is the solution to cache breakdown?
- Based on the actual business situation, maintain a hot-data table in Redis and set those keys (say, the top 1000) to never expire, refreshing the top-1000 data on a schedule.
- Add a mutex key
In a cache breakdown, multiple threads query the database simultaneously, so we can take a mutex lock on the first request before querying. Other threads that reach this point wait until the first thread has queried the data and written it to the cache; when they proceed, they see the cache is populated and read from it directly.
```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

static Lock reenLock = new ReentrantLock();

public List<String> getData04() throws InterruptedException {
    // 1. Try the cache first
    List<String> result = getDataFromCache();
    if (result.isEmpty()) {
        if (reenLock.tryLock()) {
            try {
                System.out.println("Got the lock; loading from the DB and writing to the cache");
                // 2. Only the lock holder queries the DB and refills the cache
                result = getDataFromDB();
                setDataToCache(result);
            } finally {
                reenLock.unlock();
            }
        } else {
            // 3. Another thread is rebuilding the cache: check again, wait briefly, retry
            result = getDataFromCache();
            if (result.isEmpty()) {
                System.out.println("No lock and nothing in the cache yet; waiting...");
                Thread.sleep(100);
                return getData04(); // retry
            }
        }
    }
    return result;
}
```