Some suggestions for optimization
1. Try to use short keys
Of course, don't take this so far that the key loses its readability: a key should still convey its meaning. Some values can also be simplified, for example storing gender as 0 and 1.
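As an illustration only (the key name, value, and local connection below are hypothetical, not from the original article), a short but still readable key plus a compact value might look like this in Jedis:

import redis.clients.jedis.Jedis;

public class ShortKeyExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);   // assumed local instance
        // "u:1001:sex" is shorter than "user:1001:gender" but still readable
        jedis.set("u:1001:sex", "1");                 // 1 = male, 0 = female
        System.out.println(jedis.get("u:1001:sex"));
        jedis.close();
    }
}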
2. Avoid KEYS *
KEYS * is a blocking command: while it runs, no other commands can be executed on your instance. When there are only a few keys in Redis the impact is negligible, but with a large amount of data it is severe. So we should avoid this command and use SCAN instead, which iterates the keyspace incrementally.
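A minimal sketch of iterating keys with SCAN from the Jedis client (Jedis 3.x import paths, a local instance, and the "user:*" pattern are assumptions for illustration):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class ScanExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        ScanParams params = new ScanParams().match("user:*").count(100);
        String cursor = ScanParams.SCAN_POINTER_START;   // "0"
        do {
            // each call returns one small batch, so Redis is never blocked for long
            ScanResult<String> page = jedis.scan(cursor, params);
            for (String key : page.getResult()) {
                System.out.println(key);
            }
            cursor = page.getCursor();
        } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
        jedis.close();
    }
}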
3. Compress your data before saving it in Redis
Redis provides two internal encodings for each data type and automatically switches to the appropriate encoding in different situations.
4. Set an expiration time on keys
Use key expiration whenever you can. For temporary data such as SMS verification codes, Redis will automatically remove the key for you once it expires.
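A minimal Jedis sketch (the key name, code value, and 5-minute TTL are illustrative assumptions):

import redis.clients.jedis.Jedis;

public class ExpireExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // SETEX writes the value and its TTL (300 seconds) in one command
        jedis.setex("sms:code:13800000000", 300, "482913");
        System.out.println(jedis.ttl("sms:code:13800000000"));   // remaining seconds
        jedis.close();
    }
}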
5. Select a maxmemory-policy
When a Redis instance's memory is full, it tries to reclaim some keys. Which policy to choose depends on how you use Redis: if you set expiration times on your keys, volatile-lru (the default) is highly recommended; but if you run Redis as a pure cache and your keys have no expiration, allkeys-lru is usually the better choice. maxmemory-samples 3 (the default) means that three keys are sampled at random each time and the least recently used one among them is evicted. A configuration sketch follows the list of modes below.
maxmemory-policy has six modes:
- volatile-lru: evict keys using LRU, but only among keys that have an expiration set (default)
- allkeys-lru: evict the least recently used keys among all keys
- volatile-random: evict random keys among keys that have an expiration set
- allkeys-random: evict random keys among all keys
- volatile-ttl: evict the keys that are closest to expiring (smallest TTL)
- noeviction: never evict; return an error on writes when memory is full
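As a sketch only (the chosen policy and sample count here are assumptions for illustration; pick what matches your workload), the policy can be changed at runtime through CONFIG SET, here via Jedis:

import redis.clients.jedis.Jedis;

public class EvictionPolicyExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // cache-style workload without TTLs: evict by LRU across all keys
        jedis.configSet("maxmemory-policy", "allkeys-lru");
        // how many keys are sampled for each eviction decision
        jedis.configSet("maxmemory-samples", "3");
        System.out.println(jedis.configGet("maxmemory-policy"));
        jedis.close();
    }
}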
6. Use bit-level and byte-level operations to reduce unnecessary memory usage (see the sketch after this list).
- Bit-level operations: GETBIT and SETBIT
- Byte-level operations: GETRANGE and SETRANGE
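A minimal Jedis sketch of both kinds of operation (the key names and offsets are illustrative assumptions):

import redis.clients.jedis.Jedis;

public class BitByteExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // bit level: record that user 42 was online today using a single bit
        jedis.setbit("online:2016-05-02", 42, true);
        System.out.println(jedis.getbit("online:2016-05-02", 42));   // true
        // byte level: overwrite part of a string in place instead of rewriting it all
        jedis.set("greeting", "Hello Redis");
        jedis.setrange("greeting", 6, "World");
        System.out.println(jedis.getrange("greeting", 0, -1));       // "Hello World"
        jedis.close();
    }
}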
7. Use hash storage wherever possible.
8. When business scenarios do not require data persistence, turn off all persistence methods for best performance.
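A sketch of turning persistence off at runtime via CONFIG SET (whether this is acceptable depends entirely on your business scenario; both settings below are assumptions for illustration and can equally be made permanent in redis.conf):

import redis.clients.jedis.Jedis;

public class DisablePersistenceExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.configSet("save", "");          // disable RDB snapshots
        jedis.configSet("appendonly", "no");  // disable the AOF log
        jedis.close();
    }
}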
9. Use pipelining when you want to write more than one piece of data at a time (see the performance test below).
10. Limit the memory size of Redis (64-bit systems do not limit memory by default, while 32-bit systems are capped at roughly 3GB).
If the amount of data is unpredictable and memory is limited, try to cap the memory Redis may use, so that Redis does not spill into the swap partition or hit an OOM error. With the limit in place, no new data can be added once the specified memory is reached and an OOM error is reported; you can set a maxmemory-policy so that old data is evicted instead when memory runs out.
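A minimal sketch of capping memory at runtime (the 1gb value is an arbitrary example, and this can equally be set in redis.conf; the eviction policy complements the sketch in section 5):

import redis.clients.jedis.Jedis;

public class MaxMemoryExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.configSet("maxmemory", "1gb");                // hard cap on memory use
        jedis.configSet("maxmemory-policy", "allkeys-lru"); // evict instead of erroring
        System.out.println(jedis.configGet("maxmemory"));
        jedis.close();
    }
}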
11. SLOWLOG GET / RESET / LEN
- slowlog-log-slower-than: determines the threshold, in microseconds (1 second = 1,000,000 microseconds), above which a command is recorded in the slow log.
- slowlog-max-len: determines the maximum number of slow-log entries that are kept. The slow log lets you check which commands are responsible for Redis performance degradation.
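A sketch of configuring and reading the slow log with Jedis (the Jedis 3.x import path for Slowlog is assumed; the 10 ms threshold and log length are illustrative):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Slowlog;

public class SlowlogExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.configSet("slowlog-log-slower-than", "10000"); // log commands slower than 10 ms
        jedis.configSet("slowlog-max-len", "128");           // keep the last 128 entries
        System.out.println("entries: " + jedis.slowlogLen());
        for (Slowlog entry : jedis.slowlogGet(10)) {          // inspect the 10 most recent
            System.out.println(entry.getExecutionTime() + " us  " + entry.getArgs());
        }
        jedis.slowlogReset();                                 // clear the log
        jedis.close();
    }
}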
Optimization case analysis
Pipeline performance test
Redis pipelining is not available on the command line, but Redis supports it and it can be used from the Java client (Jedis). Sample code:
import org.junit.Test;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

/**
 * Initialize 10,000 entries without a pipeline.
 * Time: 3079 ms
 * @throws Exception
 */
@Test
public void notUsePipeline() throws Exception {
    Jedis jedis = JedisUtil.getJedis();   // JedisUtil is the project's connection helper
    long start_time = System.currentTimeMillis();
    for (int i = 0; i < 10000; i++) {
        jedis.set("aa_" + i, i + "");
    }
    System.out.println(System.currentTimeMillis() - start_time);
}

/**
 * Initialize 10,000 entries with a pipeline.
 * Time:
 * @throws Exception
 */
@Test
public void usePipeline() throws Exception {
    Jedis jedis = JedisUtil.getJedis();
    long start_time = System.currentTimeMillis();
    Pipeline pipelined = jedis.pipelined();
    for (int i = 0; i < 10000; i++) {
        pipelined.set("cc_" + i, i + "");
    }
    pipelined.sync();   // flush all queued commands and wait for the replies
    System.out.println(System.currentTimeMillis() - start_time);
}
The application of the hash
Example: we want to store a user-information object, where the key is the user ID and the value is the user object (name, age, date of birth, and so on).
- Using the user ID as the lookup key and wrapping the other information into a serialized object has drawbacks: serialization/deserialization adds cost, and modifying a single field requires fetching the whole object; the update also has to be protected against concurrent modification, which introduces complications such as CAS (check-and-set).
- Alternatively, the user-information object can be stored as one key-value pair per member. This removes the serialization overhead and the concurrency problem, but the user ID is stored repeatedly, once for every member.
- A Redis Hash solves both problems by providing an interface to access the Map members directly. The key is still the user ID, the value is a Map whose field names are the member attribute names and whose values are the attribute values. (Internal implementation: the value of a Redis Hash has two different implementations. When the Hash has few members, Redis stores the value compactly, in a structure similar to a one-dimensional array, to save memory instead of using a real HashMap; the encoding of the corresponding value's redisObject is zipmap. When the number of members grows, it is automatically converted to a real HashMap and the encoding becomes ht.)
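A minimal Jedis sketch of the hash layout described above (user ID 1001 and the field values are illustrative assumptions):

import java.util.Map;
import redis.clients.jedis.Jedis;

public class UserHashExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost", 6379);
        // one hash per user: fields are attribute names, values are attribute values
        jedis.hset("user:1001", "name", "Alice");
        jedis.hset("user:1001", "age", "30");
        jedis.hset("user:1001", "birthday", "1994-05-02");
        // update a single attribute without deserializing or rewriting the whole object
        jedis.hset("user:1001", "age", "31");
        Map<String, String> user = jedis.hgetAll("user:1001");
        System.out.println(user);
        jedis.close();
    }
}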
Instagram Memory Optimization
You may be familiar with Instagram, the popular photo app with 300 million monthly active users. Four years ago Instagram already had more than 300 million photos and needed to solve a problem: it wanted to know the author of every photo (look up the user UID by photo ID), and the lookups had to be fast. The data was kept in memory, initially using the String structure as plain key-value pairs:
HSET "mediabucket:1155" "1155315" "939"
HGET "mediabucket:1155" "1155315"
"939"Copy the code
Test: 1 million entries used about 70MB of RAM, so 300 million photos would use about 21GB. At the time (four years ago) the best option was an EC2 high-memory instance (17GB or 34GB; 68GB would have been a waste), and 21GB would not fit on the 17GB model.
The Instagram developers asked Pieter Noordhuis, one of the Redis developers, how to optimize this and were told to use a Hash structure: split the data into segments and store each segment in a Hash.
When a Hash has fewer than a certain number of fields, Redis stores it in a compressed encoding, which saves a lot of memory; the String structure above gets no such benefit. This threshold is controlled by the hash-zipmap-max-entries parameter in the configuration file. Experiments showed that hash-zipmap-max-entries performs best when set to 1000; above 1000, the cost of HSET commands becomes very high.
HSET "mediabucket:1155" "1155315" "939"
HGET "mediabucket:1155" "1155315"
"939"Copy the code
Test: 1 million entries now consume only 16MB of memory, and total usage drops to about 5GB. Of course, we can optimize further: drop the "mediabucket:" prefix and shorten the key by 12 bytes.
HSET "1155" "315" "939"
HGET "1155" "315"
"939"Copy the code
WARNING optimization at startup
When we start Redis, the following three warnings appear by default:
- Increase the maximum TCP backlog (listen queue) in Linux
WARNING: The TCP backlog setting of 511 cannot be enforced because
/proc/sys/net/core/somaxconn is set to the lower value of 128.
In high-concurrency environments you need a high backlog value to avoid slow-client connection issues. Note that the Linux kernel silently truncates this setting to the value of /proc/sys/net/core/somaxconn, so make sure you raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
echo 511 > /proc/sys/net/core/somaxconn
Note: this parameter does not limit the maximum number of connections Redis can accept. To limit the number of Redis connections, change maxclients; the default maximum is 10000.
- Modify the Linux kernel memory allocation policy
WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1'
The reason: when Redis persists data it forks a child process, and in theory the child occupies the same amount of memory as the parent; for example, if the parent uses 8GB, the kernel also needs to be able to allocate 8GB to the child. When memory is insufficient, this often causes the Redis server to go down or the I/O load to become too high, reducing efficiency. So the memory allocation policy should be set to 1, meaning the kernel allows all physical memory to be allocated regardless of the current memory state.
There are three memory allocation strategies; the optional values of overcommit_memory are 0, 1, and 2.
- 0: the kernel checks whether there is enough available memory for the process; if there is, the allocation is allowed, otherwise the request fails and an error is returned to the application process.
- 1: the allocation is allowed no matter how much memory is requested.
- 2: only up to the combined size of physical memory and swap space may be allocated (swap is generally half the size of physical memory).
- Transparent Huge Pages(THP)
THP can cause memory locking and hurt Redis performance, so it is recommended to disable it.
Transparent HugePages are meant to improve memory-management performance (they are not supported on 32-bit RHEL 6). Disable them with:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Add this command to /etc/rc.local so that it takes effect again after a reboot.
The original source
xiaoxiaomo -> http://blog.xiaoxiaomo.com/2016/05/02/Redis-%E4%BC%98%E5%8C%96%E8%AF%A6%E8%A7%A3/