Search WeChat for the "Water Drop and Silver Bullet" official account to get high-quality technical articles first. Seven years as a senior back-end engineer, showing you a different technical perspective.

Hi, I’m Kaito.

In this article, I want to talk to you about some of the pitfalls you might face when using Redis.

If you’ve encountered any of the following “weird” situations while using Redis, chances are you’ve stepped into a pit:

  • A key was given an expiration time, so why did it never expire?
  • SETBIT is an O(1) command, so how did it cause Redis to OOM?
  • RANDOMKEY just pulls out a random key, so how can it block Redis?
  • Why does the same query find no data on the master, yet find data on the slave?
  • Why does the slave use more memory than the master?
  • Why is data written to Redis lost?
  • …

What causes these problems?

In this article, I’m going to take a look at the pitfalls you might step on when using Redis, and how to avoid them.

I’ve divided these questions into three main sections:

  1. What are the pitfalls of common commands?
  2. What are the pitfalls of data persistence?
  3. What are the pitfalls of master/slave synchronization?

The causes of these problems are likely to “upend” your perception, so if you’re ready, follow my lead.

This article is packed with useful material. I hope you can read it patiently.

What are the pitfalls of common commands?

First, let’s take a look at some of the common commands that get “unexpected” results when using Redis.

1) Is the expiration time accidentally lost?

When you use Redis, you’ll probably use the SET command a lot. It’s pretty simple.

In addition to setting key-value, a SET can also SET the expiration time of a key, as follows:

127.0.0.1:6379> SET testkey val1 EX 60
OK
127.0.0.1:6379> TTL testkey
(integer) 59

However, if you later modify the key's value with a SET command that omits the expiration time parameter, the key's expiration time will be "erased".

127.0.0.1:6379> SET testkey val2
OK
127.0.0.1:6379> TTL testkey  // will never expire!
(integer) -1

See? Testkey now never expires!

If you’ve just started using Redis, you’ve probably stepped into this hole too.

If the SET command does not SET an expiration time, Redis will automatically “erase” the expiration time of the key.

If you find that Redis memory continues to grow, and many keys have expiration dates that are lost, there is a good chance that this is the cause.

In this case, you will have a large number of non-expired keys in Redis, consuming too much memory resources.

Therefore, if you set an expiration time with the initial SET, make sure that any later SET that modifies the key also includes the expiration time parameter, to avoid losing the expiration time.
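The pitfall and both workarounds can be sketched with a tiny in-memory model (this is a pure-Python simulation, not real Redis; the keepttl flag mimics the KEEPTTL option that Redis 6.0 added to SET):

```python
import time

# Minimal in-memory sketch (not real Redis) of why a plain SET "erases"
# the expiration time, and how re-passing EX (or KEEPTTL) avoids it.
store = {}   # key -> (value, expire_at or None)

def redis_set(key, value, ex=None, keepttl=False):
    """SET: by default, discards any existing TTL, like real Redis."""
    expire_at = time.time() + ex if ex is not None else None
    if keepttl and key in store and expire_at is None:
        expire_at = store[key][1]          # mimic Redis 6.0+ KEEPTTL
    store[key] = (value, expire_at)

def redis_ttl(key):
    """TTL: -1 means the key exists but will never expire."""
    _, expire_at = store[key]
    return -1 if expire_at is None else round(expire_at - time.time())

redis_set("testkey", "val1", ex=60)
assert redis_ttl("testkey") > 0            # TTL is set

redis_set("testkey", "val2")               # plain SET: TTL is erased!
assert redis_ttl("testkey") == -1

redis_set("testkey", "val3", ex=60)        # fix 1: always re-pass EX
assert redis_ttl("testkey") > 0
redis_set("testkey", "val4", keepttl=True) # fix 2 (Redis 6.0+): SET ... KEEPTTL
assert redis_ttl("testkey") > 0
```

With a real client, the equivalent fixes are `SET testkey val2 EX 60` or, on Redis 6.0 and above, `SET testkey val2 KEEPTTL`.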

2) How can DEL block Redis?

To delete a key, you naturally use the DEL command. But have you ever thought about its time complexity?

O(1)? Not necessarily.

If you read the official Redis documentation carefully, you’ll see that the time it takes to delete a key depends on the type of key it is.

The Redis documentation describes the DEL command as follows:

  • If the key is a String, DEL takes O(1)
  • If the key is a List/Hash/Set/ZSet, DEL takes O(M), where M is the number of elements

That is, if you want to delete a key that is not a String, the more elements in the key, the longer it will take to execute the DEL!

Why is that?

The reason is that to remove this key, Redis needs to release memory for each element in turn, and the more elements, the more time this process takes.

Such a long operation is bound to block the entire Redis instance and affect Redis performance.

When deleting a List/Hash/Set/ZSet key, do not mindlessly execute a DEL. Instead, do the following:

  1. To query the number of elements, run the LLEN/HLEN/SCARD/ZCARD command
  2. Determine the number of elements: If the number of elements is small, run DEL directly to delete them; otherwise, delete them in batches
  3. Batch deletion: Run the LRANGE/HSCAN/SSCAN/ZSCAN + LPOP/RPOP/HDEL/SREM/ZREM command to delete the nodes in batches
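The three steps above can be sketched as follows (a simulation over a plain dict standing in for a big Hash; with a real client, the same loop shape would use HLEN, HSCAN, and HDEL):

```python
# Simulate a big Hash key as a dict; delete it in small batches so that no
# single operation has to free a huge number of elements at once.
big_hash = {f"field:{i}": i for i in range(10_000)}

BATCH = 100          # elements deleted per round (one HSCAN page + HDELs)
THRESHOLD = 1000     # below this, a single DEL is considered safe

def safe_delete(h):
    if len(h) <= THRESHOLD:        # step 1+2: small key, plain DEL is fine
        h.clear()
        return
    while h:                       # step 3: big key, delete BATCH fields per round
        batch = list(h)[:BATCH]    # stand-in for one HSCAN page
        for field in batch:        # stand-in for HDEL key field [field ...]
            del h[field]

safe_delete(big_hash)
assert len(big_hash) == 0
```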

That covers deleting List/Hash/Set/ZSet keys. Now, what about deleting a String key?

Huh? Deleting a String key takes O(1) time. Surely that can't block Redis?

In fact, this is not necessarily!

If you think about it, what if this key takes up a lot of memory?

For example, if this key stores 500MB of data (obviously, it is a Bigkey), it will still take longer to execute DEL!

This is because it takes time for Redis to release this amount of memory to the operating system, so the operation takes longer.

So, for strings, you’d better not store too much data either, or you’ll have performance problems when you delete it.

At this point, you might be thinking: doesn't Redis 4.0 provide lazy-free? If you turn it on, memory is freed in a background thread, so the main thread won't block.

That’s a very good question.

Could this really be the case?

I’ll tell you the conclusion: even though Redis has lazy-free enabled, when a String bigkey is deleted, it is still processed in the main thread, not in the background thread. So, there is still a risk of blocking Redis!

Why is that?

If you are interested in lazy-free, you can read up on the lazy-free material to find the answer. 🙂

There is a lot to cover on lazy-free, so I plan to write a separate article about it later.

3) RANDOMKEY also blocks Redis?

If you want to randomly view a key in Redis, you usually use RANDOMKEY.

This command will pull a “random” key from Redis.

Since it’s random, this must be pretty fast, right?

It’s not.

To explain this, consider Redis’s expiration policy.

If you are familiar with the Redis expiration strategy, you should know that Redis uses a combination of periodic cleanup and lazy cleanup to remove expired keys.

When RANDOMKEY pulls out a random key, it first checks whether that key has expired.

If the key is out of date, Redis will delete it, which is lazy cleanup.

But the cleanup isn't the end of it: Redis can't just hand an "expired" key back to the client.

At this point, Redis will continue to pick up a random key and determine whether it is expired or not until it finds an unexpired key and returns it to the client.

Here’s how it works:

  1. The master randomly fetches a key and checks whether it has expired
  2. If the key has expired, delete it and continue fetching keys at random
  3. The loop repeats until an unexpired key is found, and returns
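The loop above can be simulated in a few lines (a pure-Python sketch of the lazy-cleanup behavior, not the Redis source):

```python
import random, time

now = time.time()
# keyspace: key -> expire_at timestamp (None = never expires)
keys = {f"k{i}": now - 1 for i in range(999)}   # 999 already-expired keys
keys["alive"] = None                            # one key with no expiration

def randomkey_on_master(keys):
    """Sample random keys, lazily deleting expired ones, until a live key is found."""
    while keys:
        k = random.choice(list(keys))
        expire_at = keys[k]
        if expire_at is not None and expire_at <= time.time():
            del keys[k]        # lazy cleanup: the expired key is deleted...
            continue           # ...and the master samples again
        return k               # the first non-expired key is returned
    return None

# The more expired-but-uncleaned keys there are, the longer this loop runs.
assert randomkey_on_master(keys) == "alive"
```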

However, there is a problem here: if a large number of keys in Redis have already expired but have not yet been cleaned up, this loop can take a long time to finish, and that time is spent on cleaning up expired keys + finding a non-expired key.

As a result, RANDOMKEY takes longer to execute, which affects Redis performance.

The above process is actually performed on the Master.

The problem is even worse if RANDOMKEY is executed on the slave!

Why is that?

The main reason is that the slave itself does not clean expired keys.

When does slave delete an expired key?

In fact, when a key expires, the master deletes it first and then sends a DEL command to the slave, telling the slave to delete the key as well, thus keeping the master and slave data consistent.

It’s the same scenario: Redis has a large number of keys that have expired but have not been cleaned, so when executing RANDOMKEY on slave, the following problem occurs:

  1. Slave Retrieves a key randomly and checks whether it has expired
  2. The key has expired, but slave does not delete it and continues to randomly search for non-expired keys
  3. Since a large number of keys have expired, the slave cannot find a key that meets the condition, and it falls into an "endless loop"!

That is, executing RANDOMKEY on a slave could cause the entire Redis instance to freeze!

Didn’t think of that? How could a random key on a slave possibly cause such serious consequences?

This is actually a Bug in Redis that was not fixed until 5.0.

The fix works like this: when RANDOMKEY is executed on a slave, Redis checks whether every key in the instance has an expiration time set. If so, to avoid searching for too long, the slave looks up the hash table at most 100 times and then exits the loop, whether or not it found a key.

In essence, the fix puts an upper bound on the number of lookups, so the slave can no longer fall into an endless loop.

Although this fix prevents the slave from falling into an infinite loop and freezing the entire instance, executing this command on the master can still take a long time.

So, if you see a “wobble” in Redis when you use RANDOMKEY, it’s probably because of that!

4) O(1) SETBIT will cause Redis OOM?

When using the Redis String type, besides writing plain strings directly, you can also use it as a bitmap.

Specifically, we can treat a String key as a sequence of bits and operate on individual bits, as follows:

127.0.0.1:6379> SETBIT testkey 10 1
(integer) 1
127.0.0.1:6379> GETBIT testkey 10
(integer) 1

The position of each bit being operated on is called the offset.

However, there is a pit that you need to be aware of.

If the key does not exist, or if the memory usage of the key is small and the offset you are operating on is very large, then Redis will need to allocate “more memory”, which will take longer and affect performance.

Therefore, when using SETBIT, you must pay attention to the size of the offset; an offset that is too large will cause Redis to lag.

Moreover, a key like this is a typical bigkey: besides the performance impact of allocating its memory, deleting it also takes longer.
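The memory cost is easy to estimate: a bitmap must grow to offset/8 + 1 bytes for the target bit to exist, so a single careless SETBIT can force a huge allocation (a back-of-the-envelope sketch):

```python
def setbit_bytes(offset: int) -> int:
    """Bytes a String must hold so that bit number `offset` (0-based) exists."""
    return offset // 8 + 1

# A modest offset costs almost nothing...
assert setbit_bytes(10) == 2
# ...but SETBIT with the maximum offset (2^32 - 1) allocates 512 MB in one go
assert setbit_bytes(2**32 - 1) == 512 * 1024 * 1024
```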

5) Executing MONITOR also causes Redis OOM?

I’m sure you’ve heard a lot about this pit.

When you execute a MONITOR command, Redis writes each command to the client’s “output buffer” from which the client reads the results returned by the server.

However, if your Redis QPS is high, this will cause the output buffer to continue to grow and consume a large amount of Redis memory resources. If your machine is running low on memory resources, the Redis instance will be at risk of getting OOM.

So you need to be careful with MONITOR, especially if the QPS is high.

All of the above problem scenarios occur when we use common commands, and are most likely triggered “accidentally”.

Next, let's look at the pitfalls of Redis "data persistence".

What are the pitfalls of data persistence?

Redis offers two kinds of data persistence: RDB and AOF.

RDB takes snapshots of the data, while AOF appends every write command to a log file.

Problems with data persistence are also concentrated in these two chunks, and we’ll look at them one by one.

1) Master crashes and slave data is lost?

Data loss occurs if your Redis is deployed in the following mode:

  • Master-slave + sentinel deployment instance
  • The data persistence function is disabled for the master
  • Redis processes are managed by Supervisor and configured to restart automatically when the process is down.

If the master goes down at this point, the following problems will occur:

  • The master process crashes and is automatically restarted by Supervisor before the sentinel initiates the failover
  • But the master doesn’t have any data persistence enabled, and it starts up as an “empty” instance
  • In order to be consistent with the master, the slave automatically “cleans” all data in the instance and becomes an “empty” instance

See? In this scenario, all master/slave data is lost.

In this case, when a business application accesses Redis and finds that there is no data in the cache, it will send all requests to the back-end database. This will further cause a “cache avalanche”, which has a great impact on the business.

So you must avoid this situation. My advice to you is:

  1. Do not use process-management tools to automatically restart Redis instances
  2. When the master goes down, let the sentinel initiate the failover and promote a slave to the new master
  3. After the failover completes, restart the old master so that it rejoins as a slave

You want to avoid this problem when you configure persistence.

2) Does AOF Everysec really not block the main thread?

When Redis enables AOF, you need to configure a disk flushing policy for AOF.

Balancing performance and data safety, you would most likely choose the appendfsync everysec scheme.

This works by flushing the AOF page cache to disk (fsync) every second by the background thread of Redis.

The advantage of this scheme is that the time-consuming operation of AOF disk brushing is carried out in the background thread, avoiding the impact on the main thread.

But does it really not affect the main thread?

The answer is no.

In fact, there is a scenario where the Redis background thread performing the AOF page-cache flush (fsync) gets blocked: when the disk I/O load is too high, the fsync call blocks.

At this point, the main thread still receives write requests, so the main thread will first determine whether the last background thread flush successfully.

How do you tell?

The background thread records the flush time after the flush succeeds.

The main thread will use this time to determine how long it has been since the last flush. The process goes like this:

  1. Before writing to the AOF page cache (the write system call), the main thread checks whether the background fsync has completed
  2. If fsync has completed, the main thread writes to the AOF page cache directly
  3. If fsync has not completed, it checks how long it has been since the last fsync
  4. If less than 2 seconds have passed since the last successful fsync, the main thread returns directly without writing to the AOF page cache
  5. If more than 2 seconds have passed since the last successful fsync, the main thread forces a write to the AOF page cache (the write system call)
  6. Because the disk I/O load is high, the background fsync is blocked, so the main thread also blocks when writing to the AOF page cache (fsync and write on the same file are mutually exclusive; write must wait for fsync to finish)
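The branch logic in these steps can be condensed into a small sketch (a simplification of the everysec behavior described above, not the actual Redis source):

```python
def should_write_aof_page_cache(fsync_in_progress: bool,
                                seconds_since_last_fsync: float) -> str:
    """What the main thread decides before write()ing to the AOF page cache."""
    if not fsync_in_progress:
        return "write"                 # step 2: fsync done, write directly
    if seconds_since_last_fsync < 2:
        return "postpone"              # step 4: return without writing (data at risk!)
    return "forced write (may block)"  # step 5: write anyway; blocks if fsync blocks

assert should_write_aof_page_cache(False, 0.5) == "write"
assert should_write_aof_page_cache(True, 1.2) == "postpone"
assert should_write_aof_page_cache(True, 2.5) == "forced write (may block)"
```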

Through analysis we can see that even if you configure the AOF flush policy as appendfsync everysec, there is still a risk of blocking the main thread.

An fsync blocked by a high disk I/O load in turn blocks the main thread when it writes to the AOF page cache.

Therefore, you must ensure that the disk has sufficient IO resources to avoid this problem.

3) Does AOF Everysec really only lose 1 second of data?

Then continue the analysis of the above problem.

As mentioned above, we need to focus on step 4 above.

If less than 2 seconds have passed since the last successful fsync, the main thread returns directly and does not write to the AOF page cache.

This means that while the background thread is performing the fsync flush, the main thread may hold off writing to the AOF page cache for up to 2 seconds.

If Redis goes down at this point, then 2 seconds of data is lost in the AOF file, not 1 second!

Why does the Redis main thread wait 2 seconds to write to the AOF page cache?

When Redis AOF is set to appendfsync everysec, normally the background thread will perform fsync flush every second and will not block if the disk resources are sufficient.

In other words, the main thread could simply write to the AOF page cache without caring whether the background thread's flush succeeded.

However, the Redis authors consider that the background thread fsync is at risk of blocking if disk I/O resources are tight.

So, before the main thread writes to the AOF page cache, it first checks the time since the last successful fsync. If more than 1 second has passed, the main thread knows that fsync may be blocked.

Therefore, the main thread waits up to 2 seconds before writing to the AOF page cache, in order to:

  1. Reduce the risk of main thread blocking (the main thread will block immediately if the AOF page cache is written mindlessly)
  2. If fsync blocks, the main thread gives the background thread one second to wait for fsync to succeed

The trade-off is that if an outage occurs at this point, AOF loses two seconds of data instead of one.

This solution should be a further trade-off between performance and data security for Redis authors.

Anyway, all you need to know here is that even though AOF is configured to flush every second, AOF actually loses two seconds of data in the extreme case described above.

4) Redis OOM during RDB snapshots and AOF rewrite?

Finally, let’s look at what happens when Redis performs RDB snapshots and AOF rewrite.

When Redis does RDB snapshots and AOF rewrite, it creates child processes to persist data from instances to disk.

To create a child process, the operating system fork function is called.

After the fork completes, the parent and child processes share the same memory data.

However, the main process can still receive Write requests, and the incoming Write requests will use Copy On Write mode to manipulate memory data.

In other words, when the main process has data that needs to be modified, Redis does not directly modify the data in the existing memory. Instead, Redis copies the data in the new memory first and then modifies the data in the new memory. This is called “copy-on-write”.

Put simply, copy-on-write means: whoever needs to write copies first and then modifies.
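Copy-on-write can be illustrated with a toy model (pages as dict entries; real COW happens in the OS at page granularity, typically 4 KB):

```python
# Toy copy-on-write model: parent and child initially share all "pages".
shared_pages = {i: f"data-{i}" for i in range(4)}   # memory at fork() time
parent_view = dict.fromkeys(shared_pages)           # None = still shared
child_view = shared_pages                           # child reads the snapshot

def parent_write(page, value):
    """On write, the parent copies the page first, then modifies the copy."""
    parent_view[page] = value      # private copy, visible only to the parent

def parent_read(page):
    v = parent_view[page]
    return v if v is not None else shared_pages[page]

parent_write(2, "new-data")                 # parent modifies page 2
assert parent_read(2) == "new-data"         # parent sees the new data
assert child_view[2] == "data-2"            # child still sees the fork-time snapshot
# Extra memory used grows with the number of copied pages, which is why a
# write-heavy workload during RDB/AOF-rewrite can balloon memory usage.
```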

As you can see, if the parent process wants to change a key, it needs to copy the original memory data into the new memory. This process involves requesting “new memory”.

If your workload is write-heavy and the write OPS is very high, then during an RDB snapshot or AOF rewrite a lot of memory copying takes place.

What’s the problem with that?

With so many write requests, the Redis parent process can end up allocating a great deal of memory; the wider the range of keys modified during this period, the more new memory is allocated.

If your machine is running low on memory resources, this will cause Redis to be at risk of getting OOM!

This is why you will hear DBAs advise reserving memory on Redis machines.

The aim is to avoid Redis OOM during RDB snapshots and AOF rewrites.

These are the pitfalls of “data persistence”. How many pitfalls have you encountered?

Let’s look at some of the problems with master-slave replication.

What are the pitfalls of master/slave replication?

In order to ensure high availability, Redis provides a master-slave replication mode, which ensures that there are multiple “copies” of Redis. When the master library goes down, we still have a slave library to use.

During master-slave synchronization, there are still many pits, which we will look at in turn.

1) Does master/slave replication lose data?

First, you need to know that master/slave replication in Redis is “asynchronous”.

This means that if the master suddenly goes down, some data may not be synchronized to the slave.

What problems does this cause?

If you use Redis as a pure cache, there is no business impact.

Service applications can query data that has not been synchronized from the master to the slave in the back-end database.

However, for services that use Redis as a database or as a distributed lock, data loss/lock loss may occur due to asynchronous replication problems.

More details on Redis distributed lock reliability are not covered here, but will be dissected in a separate article later. The only thing you need to know is that there is a probability of data loss in Redis master-slave replication.

2) The same query on the same key returns different results on the master and slave?

If a key has expired, but the key has not been cleared by the master, what will be the result of querying the key on the slave?

  1. The slave returns the key's value as normal
  2. The slave returns NULL

Which do you think it is? Think about it.

The answer is: not necessarily.

Huh? Why not?

This question is very interesting, please stay close to my thoughts, I will take you step by step to analyze the reasons.

In fact, what result is returned depends on three factors:

  1. The version of Redis
  2. Command to be executed
  3. The machine clock

Let’s start with the Redis version.

If you are using a Redis version below 3.2, then querying the key on the slave will always return the value, as long as the key has not yet been cleaned by the master.

That is, the key can still be queried on the slave even if it has expired.

// Redis 2.8, executed on the slave
127.0.0.1:6479> TTL testkey
(integer) -2        // expired
127.0.0.1:6479> GET testkey
"testval"           // but it can still be queried!

On the master, by contrast, if the key has expired it is cleaned up and NULL is returned:

127.0.0.1:6379> TTL testkey
(integer) -2
127.0.0.1:6379> GET testkey
(nil)

Do you see that? Query the same key on the master and slave, and the result is different.

In fact, the slave should be the same as the master. If the key has expired, it should return NULL to the client, rather than return the value of the key as normal.

Why does this happen?

In fact, this was a Bug in Redis: in versions below 3.2, when querying a key on the slave, Redis did not check whether the key had expired at all, but mindlessly returned the result to the client.

This Bug was fixed in version 3.2, but it was “not completely fixed.”

What does "not completely fixed" mean?

This is explained in conjunction with the second factor mentioned earlier: the specific command executed.

Redis 3.2, while fixing this Bug, misses a command: EXISTS.

That is, when a key has expired and you query its data directly on the slave, for example with GET/LRANGE/HGETALL/SMEMBERS/ZRANGE, the slave returns NULL.

However, if you execute EXISTS, the slave still reports that the key exists.

127.0.0.1:6479> GET testkey
(nil)               // the key is logically expired
127.0.0.1:6479> EXISTS testkey
(integer) 1         // but it still exists!

The reason is that EXISTS does not share the same expiration-checking logic as the data-query commands.

The Redis author added the expiration check only to the query path; the EXISTS command was left unchanged.

It wasn't until Redis 4.0.11 that this overlooked Bug was fully fixed.

If you use a version above this, performing a data query or EXISTS on a slave will return “nonexistent” for expired keys.

To summarize, the slave query for expired keys goes through three stages:

  1. Below 3.2: if a key has expired but has not been cleaned, querying it on the slave with any command returns the value normally
  2. From 3.2 to 4.0.11: data-query commands return NULL, but EXISTS still reports that the key exists
  3. From 4.0.11 onward: all commands are fixed, and querying an expired key on the slave returns "does not exist"

Special thanks should be given to Fu Lei, the author of Redis Development and Operation.

I saw this problem in his article, which felt very interesting. It turned out that Redis still had such a Bug before. Then I looked up the relevant source code, and the logic of the comb, here to write an article to share with you.

Although I have already thanked him personally on WeChat, I would like to thank him again here.

Finally, let’s look at the third factor that affects query results: the “machine clock.”

Assuming we have already circumvented the bugs mentioned above, for example, if we use Redis 5.0 to query a key on the slave, will the result be different from that on the master?

The answer is, it probably will.

This has to do with the master/slave machine clock.

Both master and slave determine whether a key is expired based on the local clock.

If the slave's clock runs "faster" than the master's, the slave will judge a key to be expired, even though the key has not actually expired yet, and will return NULL to the client when it is queried.
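The clock dependence is easy to demonstrate: the expiry check simply compares the key's expire-at timestamp with the local clock, so a fast slave clock flips the verdict (a simulation, not the Redis source):

```python
def is_expired(expire_at_ms: int, local_clock_ms: int) -> bool:
    """Redis-style check: a key is expired once the local clock passes expire_at."""
    return local_clock_ms > expire_at_ms

expire_at = 1_000_000          # key should live until t = 1,000,000 ms
master_clock = 999_000         # by the master's clock, the key is still alive
slave_clock = 1_003_000        # the slave's clock runs about 4 seconds fast

assert is_expired(expire_at, master_clock) is False  # master returns the value
assert is_expired(expire_at, slave_clock) is True    # slave answers NULL
```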

Isn’t that interesting? A small expired key, unexpectedly hide so many tricks.

If you’re in a similar situation, check out the steps above to see if you’ve stepped in the hole.

3) Does master/slave switchover cause cache avalanche?

This question is an extension of the previous one.

Let’s assume that the slave’s machine clock runs “faster” than the master’s, and “much faster.”

At this point, from the slave point of view, the data in Redis is “massively outdated”.

Now suppose a master/slave switchover occurs and the slave is promoted to the new master.

When it becomes master, it starts cleaning up a lot of expired keys, which results in the following:

  1. The master deletes a large number of expired keys. As a result, the main thread is blocked and cannot process client requests in a timely manner
  2. A large amount of data in Redis is out of date, causing a cache avalanche

You see, when the master/slave machine clock is seriously inconsistent, the impact on the service is very big!

Therefore, if you handle DBA or operations work, be sure to keep the master and slave machine clocks consistent to avoid these problems.

4) Large amount of data inconsistency between master and slave?

There is another scenario that causes a large amount of master/slave data inconsistency.

This involves Redis’ maxMemory configuration.

Redis's maxmemory controls the maximum memory usage of an instance; once the limit is exceeded, and an eviction policy is configured, the instance starts evicting data.

However, there is a problem here: if the master and slave are configured with different maxmemory values, data inconsistency will occur.

For example, if the master's maxmemory is 5G and the slave's is 3G, then once the data in Redis exceeds 3G, the slave starts evicting data "ahead of time", making the master and slave data inconsistent.

In addition, even if the master and slave are given the same maxmemory, you must still be careful when adjusting the limit on both, otherwise the slave may evict data:

  • When increasing maxmemory, adjust the slave first, then the master
  • When decreasing maxmemory, adjust the master first, then the slave

Operating in this order prevents the slave from exceeding maxmemory before the master does.
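For example, to raise maxmemory from 3gb to 5gb at runtime, the order would look like this (a hypothetical sketch using CONFIG SET; the hostnames and values are placeholders):

```shell
# Increasing maxmemory: slave first, then master
redis-cli -h slave-host  CONFIG SET maxmemory 5gb
redis-cli -h master-host CONFIG SET maxmemory 5gb

# Decreasing maxmemory: master first, then slave
redis-cli -h master-host CONFIG SET maxmemory 3gb
redis-cli -h slave-host  CONFIG SET maxmemory 3gb
```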

In fact, if you think about it, what’s the key to these problems?

The root cause is that when a slave exceeds maxmemory, it evicts data "on its own".

Could these problems be avoided if the slave were not allowed to evict data by itself?

That’s right.

Redis officials probably received a lot of feedback on this issue. In Redis 5.0, they finally fixed the problem for good!

Redis 5.0 adds a configuration item, replica-ignore-maxmemory, which defaults to yes.

This parameter means that even if the slave's memory exceeds maxmemory, the slave does not evict data on its own!

In this way, the slave will always keep up with the master, and will only copy the data sent by the master.

At this point, the master/slave data is guaranteed to be exactly the same!

If you happen to be using version 5.0, you don’t have to worry about this.

5) Does slave have memory leak problem?

Yes, you read that right.

How did this happen? Let’s look at it in detail.

Slave memory leaks are triggered when you use Redis in the following scenarios:

  • Redis uses versions 4.0 and below
  • The slave is configured with read-only=no (the slave library is writable)
  • A key with an expiration date was written to the slave

In this case, the slave will suffer memory leaks: the key in the slave will not be cleaned automatically even when it expires.

If you don’t delete it, the keys will remain in slave memory, consuming slave memory.

What's most troublesome is that querying these keys with commands returns no results, yet they still occupy memory!

This is the slave “memory leak” problem.

This was also a Bug in Redis, which was fixed in Redis 4.0.

The solution is that when keys are written to a writable slave with an expiration date, the slave “records” those keys.

The slave then periodically scans these keys and cleans them up if they reach their expiration date.
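The idea behind the fix can be simulated like this (a pure-Python sketch of "record the slave-written keys, then periodically scan and clean them"; not the actual Redis implementation):

```python
import time

slave_store = {}              # key -> value
slave_expires = {}            # key -> expire_at timestamp
writable_slave_keys = set()   # keys written directly to the slave with a TTL

def slave_write(key, value, ex):
    slave_store[key] = value
    slave_expires[key] = time.time() + ex
    writable_slave_keys.add(key)      # the fix: remember these keys

def slave_expire_cycle():
    """The periodic scan the fix added: the slave cleans ONLY its own writes."""
    now = time.time()
    for key in list(writable_slave_keys):
        if slave_expires[key] <= now:
            del slave_store[key]
            del slave_expires[key]
            writable_slave_keys.discard(key)

slave_write("tmp", "v", ex=-1)        # already expired (ex is in the past)
slave_write("keep", "v", ex=60)
slave_expire_cycle()
assert "tmp" not in slave_store       # cleaned by the periodic scan
assert "keep" in slave_store
```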

If your business needs to temporarily store data on the slave, and these keys are set to expire, then this is an issue to be aware of.

You need to confirm your version of Redis, and if it is under 4.0, avoid this pit.

In fact, the best solution is to establish a Redis usage standard: slaves must be forced to read-only and must not accept writes. This not only guarantees master/slave data consistency but also avoids the slave memory-leak problem.

6) Why does master/slave full synchronization always fail?

During master/slave full synchronization, you may encounter synchronization failure in the following scenarios:

The slave sends a full synchronization request to the master. The master generates an RDB and sends it to the slave. The slave loads the RDB.

Due to the large size of the RDB data, the slave load takes a long time.

At this point, you will find that the slave has not finished loading the RDB, but the connection between the master and slave is disconnected, and the data synchronization fails.

Then you will see the slave initiate full synchronization again, and the master generate another RDB to send to the slave.

Likewise, while the slave is loading the RDB, master/slave synchronization fails once more, and so on in a loop.

What’s going on here?

In fact, this is Redis “copy storm” problem.

What is a replication storm?

As just described: master/slave full synchronization fails, synchronization restarts, synchronization fails again, and so on, in a vicious cycle that continues to waste machine resources.

Why does this cause problems?

This problem can occur if your Redis has the following characteristics:

  • The master instance's data set is very large, so the slave takes a long time to load the RDB
  • The slave's client-output-buffer-limit is set too small
  • The master has a large volume of write requests

During full data synchronization, the master first writes the write request to the master/slave replication buffer. The upper limit of the buffer is determined by the configuration.

While the slave is loading the RDB too slowly, it cannot read data from the "replication buffer" in time, so the replication buffer "overflows".

To prevent memory growth, the master forcibly disconnects the slave, and full synchronization fails.

The slave that failed to synchronize would then “restart” the full synchronization, causing the problem described above to repeat itself in a vicious cycle known as a “replication storm.”

How to solve this problem? Let me give you the following suggestions:

  1. Redis instances should not be too large to avoid large RDB
  2. Set the replication buffer as large as possible to allow sufficient time for the slave to load the RDB and reduce the probability of full synchronization failure
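For suggestion 2, the relevant setting is client-output-buffer-limit for the replica client class; for example (the values here are illustrative only and should be tuned to your RDB size and write rate; in versions before 5.0 the class is named slave rather than replica):

```shell
# redis.conf on the master: disconnect a replica only if its output buffer
# exceeds 512 MB outright, or stays above 128 MB for 60 seconds
client-output-buffer-limit replica 512mb 128mb 60

# or at runtime:
redis-cli CONFIG SET client-output-buffer-limit "replica 512mb 128mb 60"
```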

If you’re in the same hole, there’s a solution.

Conclusion

To summarize, this article focuses on Redis in “command use”, “data persistence”, “master/slave synchronization” three possible “pits”.

How’s that? Does it change your perception?

This article is quite informative. If your mind is already a little “messy”, don’t worry. I have prepared a mind map for you to better understand and memorize.

Hope you can avoid these pits in advance when using Redis, so that Redis can provide better service.

Afterword

Finally, I would like to share some experience and lessons from stepping into pits during development.

In fact, entering any new field, you go through the stages of being unfamiliar, becoming familiar, stepping into pits, absorbing the lessons, and finally handling it with ease.

So how do you reduce the number of pits at this stage? Or how to efficiently troubleshoot problems after stepping on pits?

Here are four areas that should help you:

1) Read the official documentation + configuration file comments

Be sure to read the official documentation as well as the comments in the configuration file. Good software warns you about many risks right in its documentation and comments; read them carefully and you can avoid many basic problems in advance.

2) Ask questions about details and why?

Stay curious at all times. When you encounter a problem, untangle it layer by layer to gradually locate the root cause, and always keep the mindset of exploring the essence of things.

3) Dare to question, source code does not lie

If you think a problem is weird, maybe it’s a Bug, don’t be afraid to question it.

Finding the truth of a problem through the source code is a much better way than reading a hundred articles copied from each other on the Internet (which are probably all wrong).

4) There is no perfect software. Excellent software is iterative

Any good software is a step by step iteration. Bugs are normal during iteration, and we need to keep them in perspective.

These experiences can be applied to any field of study and I hope they will be of help to you.

Want to read more hardcore technology articles? Welcome to follow my official account "Water Drop and Silver Bullet".

I’m Kaito, a veteran backend programmer who thinks about technology. In my article, I will not only tell you what a technology point is, but also tell you why. I’ll also try to distill these thought processes into general methodologies that you can apply to other fields.