This is the 15th day of my participation in the August More Text Challenge
Life is short, let's learn Python together.
Previously
After going through untold hardships, Zhuang finally made it to the next round, and in walked a bright, sharp-looking interviewer. What is this big shot going to ask?
The interview begins…
Interviewer: Young man, the last interviewer was very satisfied with you. Why don't you introduce yourself?
Me: I graduated from Tsinghua University with a major in computer science. I used to work at XXX, mainly on the XXX technology stack, and I am considering changing jobs for XXX reasons.
Interviewer: Right, I already know the basics, and Python has been covered, so let me just ask some questions about Redis.
Me: OK, go ahead!
Interviewer: Can you walk me through the basic data structures of Redis?
Me: There are five commonly used data structures in Redis: string, list, hash, set, and zset. The string type is used for general counting and is the most common; the list type can serve as a message queue or record membership; the hash type is the closest to a MySQL table row and works well for database caching; the set type deduplicates and supports intersection, union, and difference, so it can compute things like shared preferences, combined preferences, and one's own unique preferences; the zset type is suited to leaderboard-style features.
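A quick sketch of the five structures with redis-py, assuming a local Redis on the default port 6379:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# string: counters
r.set("page:views", 0)
r.incr("page:views")                                # -> 1

# list: simple message queue
r.lpush("task:queue", "job-1", "job-2")
print(r.rpop("task:queue"))                         # -> job-1

# hash: row-like cache of a user record
r.hset("user:1001", mapping={"name": "zhuang", "age": "25"})
print(r.hgetall("user:1001"))

# set: deduplication and intersection (shared preferences)
r.sadd("likes:alice", "redis", "python")
r.sadd("likes:bob", "redis", "go")
print(r.sinter("likes:alice", "likes:bob"))         # -> {'redis'}

# zset: leaderboard ordered by score
r.zadd("leaderboard", {"alice": 90, "bob": 75})
print(r.zrevrange("leaderboard", 0, 1, withscores=True))
```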
Interviewer: Well, pretty much no problem there. Do you know why Redis is so fast?
Me: Redis is entirely memory based, its single-threaded model avoids the cost of context switching and locking, and its I/O multiplexing is built on epoll on Linux.
Interviewer: What about cache breakdown, cache avalanche, and cache penetration?
Me: Cache breakdown means a single hot key sees very high concurrent access; the moment that key expires, all of those requests hit the database and may bring it down. You can solve it with a rebuild lock: when request A misses the cache, it takes a lock on the key, queries the database, writes the result back to the cache, and returns it to the user, so subsequent requests read from the cache again; alternatively, you can set hot keys to never expire. Cache avalanche means a large number of keys expire at the same moment, which can also crash the database; the fix is to give different keys different expiration times so they do not all expire together. Cache penetration means requests for data that exists in neither the cache nor the database go straight to the database every time, and under heavy traffic the database may fail; a Bloom filter can be used to reject such requests early.
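A sketch of the "lock and rebuild" pattern for cache breakdown with redis-py, plus TTL jitter against avalanche; load_from_db() is a hypothetical stand-in for the real database query:

```python
import random
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_from_db(key):
    # hypothetical stand-in for a slow database query
    return f"value-for-{key}"

def get_with_rebuild_lock(key, ttl=300, lock_ttl=10):
    value = r.get(key)
    if value is not None:
        return value
    lock_key = f"lock:{key}"
    # SET NX EX: only one request wins the lock and rebuilds the cache
    if r.set(lock_key, "1", nx=True, ex=lock_ttl):
        try:
            value = load_from_db(key)
            # jitter the TTL so many keys don't expire at once (avalanche)
            r.set(key, value, ex=ttl + random.randint(0, 60))
        finally:
            r.delete(lock_key)
        return value
    # everyone else waits briefly for the winner to refill the cache
    for _ in range(20):
        time.sleep(0.05)
        value = r.get(key)
        if value is not None:
            return value
    return load_from_db(key)  # last resort: fall through to the database
```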
Interviewer: (satisfied) Yes, not bad. Do you know Redis's expiration policies and memory eviction mechanism?
Me: Redis has two main expiration-deletion strategies: lazy deletion and periodic deletion. Lazy deletion means a key is checked when it is accessed and deleted if it has expired; the drawback is that an expired key that is never accessed is never deleted and keeps occupying memory. Periodic deletion means Redis scans the database at intervals and deletes expired keys; since it cannot poll every key, it randomly samples some keys each round, checks them, and deletes the expired ones.
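A minimal look at expiration from the client side with redis-py (assuming a local Redis):

```python
import redis

r = redis.Redis(decode_responses=True)

r.setex("session:abc", 60, "logged-in")   # set a value with a 60s TTL
print(r.ttl("session:abc"))               # remaining lifetime in seconds
r.expire("session:abc", 120)              # extend the TTL to 120s
print(r.ttl("session:abc"))
# once the TTL elapses, GET returns None (lazy/periodic deletion removes the key)
```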
Even so, some expired keys can slip past both the random sampling and the lazy checks and linger in Redis; that is when the memory eviction mechanism kicks in. volatile-lru evicts the least recently used key among keys with an expiration set; volatile-random evicts a random key among keys with an expiration set; allkeys-lru evicts the least recently used key among all keys; allkeys-random evicts a random key among all keys; noeviction rejects new writes with an error once the memory threshold is reached.
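A sketch of inspecting and switching the eviction policy at runtime with redis-py (CONFIG GET / CONFIG SET); in production this usually lives in redis.conf instead:

```python
import redis

r = redis.Redis(decode_responses=True)

print(r.config_get("maxmemory-policy"))           # e.g. {'maxmemory-policy': 'noeviction'}
r.config_set("maxmemory", "256mb")                # memory threshold that triggers eviction
r.config_set("maxmemory-policy", "allkeys-lru")   # evict least recently used keys
```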
Interviewer: (nodding) Not bad, young man. Do you know about Redis data persistence?
Me: Redis is an in-memory database that supports persistence: through the persistence mechanism, data in memory is synced to disk files so it survives restarts. Redis offers two persistence schemes, RDB and AOF.
RDB persistence saves the state of the database at a point in time to an RDB file, a compressed binary file from which that snapshot can later be restored. RDB files are generated with SAVE or BGSAVE. The advantage is that it is fast and well suited to backups; the disadvantage is that data written after the last snapshot may be lost.
AOF persistence records every write command the server executes; replaying those commands restores the database. The advantage is minimal data loss; the disadvantage is that the log can grow large.
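A small sketch of poking at persistence from a client with redis-py, assuming a local Redis; the RDB save points and the AOF switch are normally set in redis.conf:

```python
import redis

r = redis.Redis(decode_responses=True)

print(r.config_get("save"))         # RDB save points, e.g. '3600 1 300 100 60 10000'
print(r.config_get("appendonly"))   # whether AOF is enabled
r.bgsave()                          # fork a child process to write an RDB snapshot
r.bgrewriteaof()                    # compact the AOF by rewriting it
```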
Interviewer: Yeah, pretty much no problem. Do you know about Sentinel?
Me: Mm-hmm, I know a bit (have to stay modest). Redis also has master-slave replication, but with plain master-slave the weakness is obvious: if the master goes down, writes are no longer possible, the slaves lose their source, and the whole architecture is unavailable unless you switch over manually, mainly because there is no automatic failover mechanism.
Sentinel monitors multiple master and slave servers at the same time and, when a monitored master goes offline, automatically promotes a slave to master so the new master can keep accepting commands. Each Sentinel sends a PING every second to all the instances it watches (masters, slaves, and the other Sentinels) and judges from the reply whether an instance is down; this is called subjective offline. Once a Sentinel marks a master as subjectively offline, it asks the other Sentinels monitoring that master; if enough of them (the configured quorum) agree it is down, the master is marked objectively offline and a failover is triggered.
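A minimal sketch of discovering the current master through Sentinel with redis-py, assuming a Sentinel at localhost:26379 monitoring a master group named "mymaster" (both are assumptions):

```python
from redis.sentinel import Sentinel

sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # writes go here
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # reads can go here

master.set("greeting", "hello")
print(replica.get("greeting"))
# after a failover, master_for() transparently resolves to the new master
```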
Interviewer: (smiling warmly) Good, young man. That's it from me for now; wait for the next interviewer to grill you, haha!
Me: OK, take care!