I’m Redis. I was brought into this world by a man named Antirez.
“Wake up! Wake up!” Dimly I heard someone calling me.
I slowly opened my eyes, and there was big brother MySQL standing next to me.
“How did I fall asleep?”
“Hey, you crashed just now! Your whole process exited, and I got hit with a flood of query requests because of you!” said MySQL.
Having just woken up, my head was still a little foggy. Big brother MySQL helped me up so I could get back to work.
“No! All my cached data is gone!”
“What? You don’t do persistence?” MySQL’s expression changed the moment he heard that.
I shook my head in embarrassment. “I saved it in memory, so it was so fast.”
“You can still save it to the hard disk! Otherwise, every time this happens you have to rebuild the whole cache from scratch. What a waste of time!”
I nodded. “Let me figure out how to do this persistence.”
RDB persistence
Within days, I came up with a plan: RDB
Since all my data is in memory, the easiest thing to do is go through it and write it all to a file.
To save space, I defined a binary format, encoding the data entry by entry into a compact RDB file.
However, I have a large amount of data, and traversing all of it for a backup takes a long time, so I can’t do it too often; otherwise I’d spend all my time on backups and get nothing else done.
Also, if there have been no writes, only reads, there’s no point repeating the backup and wasting time.
After much thought, I decided to provide configuration parameters that support periodic backups while avoiding wasted work.
Something like this:
- save 900 1 # at least 1 write within 900 seconds (15 minutes)
- save 300 10 # at least 10 writes within 300 seconds (5 minutes)
- save 60 10000 # at least 10000 writes within 60 seconds (1 minute)
Multiple conditions can be combined, and as soon as any one of them is met, I do a backup.
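Those trigger rules can be sketched in a few lines. The `SnapshotScheduler` class below is my own illustration (not Redis internals): it counts writes since the last snapshot and checks whether any rule is satisfied.

```python
import time

# Each rule is (seconds, min_changes): take a snapshot if at least
# min_changes writes happened within the last `seconds` seconds.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

class SnapshotScheduler:
    def __init__(self, rules=SAVE_RULES):
        self.rules = rules
        self.dirty = 0                 # writes since the last snapshot
        self.last_save = time.time()   # time of the last snapshot

    def record_write(self):
        self.dirty += 1

    def should_save(self, now=None):
        now = now if now is not None else time.time()
        elapsed = now - self.last_save
        # Any single rule being satisfied is enough to trigger a snapshot.
        return any(elapsed >= seconds and self.dirty >= changes
                   for seconds, changes in self.rules)
```

Note that each rule needs both conditions: one lone write triggers nothing until 900 seconds have passed.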
On second thought, this still wasn’t right: doing the snapshot myself would block everything else. I’d have to fork a child process to do it, so my own time isn’t wasted.
With backup files, the next time I crash, or even if the server loses power, as long as my backup file survives I can read it at startup and quickly restore my previous state!
MySQL: binlog
With this scheme in hand, I excitedly showed it to big brother MySQL, expecting some encouragement.
“Brother, this plan of yours has some problems,” he said, unexpectedly pouring cold water on me.
“The problem? What’s the problem?”
“Look, your periodic backup runs at minute-level intervals at best. Do you know how many requests per second our services handle? If you crash in between backups, think about how much data you’d lose!” MySQL said, dead serious.
I was a little flustered. “But each backup has to traverse all of the data, and the overhead is huge, so it isn’t suitable for high-frequency execution.”
“Who told you to traverse all the data every time? Come on, let me show you something,” said big brother MySQL, leading me to a file directory:
- mysql-bin.000001
- mysql-bin.000002
- mysql-bin.000003
- …
“Look, these are my binary logs, my binlogs. Can you guess what’s in them?” MySQL said, pointing at the pile of files.
I looked at it. It was a bunch of binary data, which I couldn’t make sense of. I shook my head.
“They record every change I make to the data, such as INSERT, UPDATE, and DELETE. When I need to restore data, they come in very handy.”
Hearing this, inspiration struck! I said goodbye to big brother MySQL and went back to research a new scheme.
AOF persistence
As you know, I’m also command-based, and my day job is to respond to command requests from business applications.
When I got back, I decided to follow MySQL’s lead and record every write command I execute to a file, and I gave this persistence method a name: AOF (Append Only File).
But I ran into the same problem as with the RDB scheme: how often should I write to the file?
I certainly can’t write every command to the file the moment it arrives; that would seriously drag down my performance! So I prepared a buffer called aof_buf, where I temporarily stash the commands to be recorded before writing them out to the file.
I tried it, and found the data still hadn’t made it into the file. After asking around, I learned that the operating system has its own cache: the data I wrote was sitting in the OS buffer, not yet written to the file. How frustrating!
It seems that after writing, I also have to flush the data to actually get it onto disk. I’ll provide a parameter and let the business application decide when to flush.
The appendfsync parameter has three values:
- always: synchronously flush to disk after every write command
- everysec: flush once per second
- no: just write, and let the operating system decide when to actually flush
AOF rewrite
This time I wasn’t as impulsive as before. I decided to run the scheme for a while before telling big brother MySQL, so I wouldn’t get splashed with cold water again.
I tried it out for a while and everything worked fine, but I found that as time went on, the AOF file I wrote kept growing and growing! Not only does it take up a lot of disk space; copying, moving, and loading it for analysis all become cumbersome and time-consuming.
I had to find a way to compress the file, a process I called AOF rewriting.
At first I planned to analyze the original AOF file and trim it by removing redundant commands, but I quickly gave up on that idea: the analysis was too much work and too fiddly, wasting a lot of energy and time.
Recording every change one by one is really dumb; lots of intermediate states are useless. Why not just record the final state of the data?
Such as:
- RPUSH name_list ‘Programming Technology Universe’
- RPUSH name_list ‘Play programming smartly’
- RPUSH name_list ‘Backend Technology School’
Can be combined into one:
- RPUSH name_list ‘Programming Technology Universe’ ‘Play Programming Smartly’ ‘Backend Technology School’
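That compaction idea can be shown with a toy example. The function below is purely illustrative, not how Redis rewrites its AOF internally: it replays a log of RPUSH commands and emits a single command per key describing the final state.

```python
from collections import defaultdict

def rewrite_rpush_log(commands):
    """Collapse a log of ("RPUSH", key, *values) entries into one
    command per key that reproduces the final list contents."""
    lists = defaultdict(list)
    for op, key, *values in commands:
        if op == "RPUSH":
            lists[key].extend(values)
    # One RPUSH per key is enough to rebuild the final state.
    return [("RPUSH", key, *vals) for key, vals in lists.items()]

log = [
    ("RPUSH", "name_list", "Programming Technology Universe"),
    ("RPUSH", "name_list", "Play programming smartly"),
    ("RPUSH", "name_list", "Backend Technology School"),
]
```

Three log entries collapse into one, which is exactly why the rewritten file ends up so much smaller than the append-only history.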
With that, I had my approach to AOF rewriting figured out. But it was still time-consuming, so I decided to fork a child process to do it, the same way RDB does.
Being careful, I noticed a problem: while the child process is rewriting, if I modify the data, the rewritten file will no longer match my current state! Big brother MySQL would surely poke holes in that, so I had to fix this bug first.
So, in addition to the existing aof_buf, I prepared another buffer: the AOF rewrite buffer.
From the moment I create the rewrite child process, I write a copy of every subsequent write command into the rewrite buffer. After the child finishes rewriting the AOF file, I append the commands from this buffer to the new AOF file.
Finally, I rename the new AOF file to replace the original bloated one, and the job is done!
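The swap at the end can be sketched as follows. Everything here is an illustrative simplification, with `finish_rewrite` and its parameters being names I made up: the child’s compacted output plus the rewrite buffer go into a temp file, which is then atomically renamed over the old AOF with `os.replace`.

```python
import os
import tempfile

def finish_rewrite(snapshot_lines, rewrite_buffer, aof_path):
    dir_name = os.path.dirname(aof_path) or "."
    # Build the new file in the same directory so the rename stays atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    with os.fdopen(fd, "w") as f:
        for line in snapshot_lines:      # compacted data from the child
            f.write(line + "\n")
        for line in rewrite_buffer:      # writes made during the rewrite
            f.write(line + "\n")
        f.flush()
        os.fsync(f.fileno())             # make sure it is on disk first
    os.replace(tmp_path, aof_path)       # atomically swap out the old AOF
```

Because the rename is atomic, a crash at any point leaves either the old complete AOF or the new complete one, never a half-written mix.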
After making sure my ideas were solid, I went back to MySQL with the new scheme. Surely this time he’d have nothing to say, right?
Big brother MySQL looked at my solution with a satisfied smile, and asked just one question:
“Now that this AOF scheme is so good, can the RDB scheme be dropped entirely?”
I hadn’t expected that question, and I fell deep into thought. How do you think I should answer?
Easter egg
“Why are you falling apart again?”
“Sorry, there’s a bug again, but don’t worry, I can quickly recover now!”
“Crashing all the time is no way to live. Running just a single instance of you is too unreliable. Go find yourself some helpers!”
To find out what happens next, stay tuned for the sequel.