This is the 23rd day of my participation in the August More Text Challenge
This article briefly introduces some performance optimization strategies commonly used in back-end service development.
1. Code
Optimizing the code implementation is the first priority, especially where it is unreasonably complex. If, given the requirements, the problem can be solved at the code level with a more efficient algorithm or design, that is the simplest and most effective approach.
2. Database
Database optimization generally has three aspects:
1) SQL tuning: in addition to mastering the basic SQL optimization methods, use the slow query log to locate problematic SQL statements, then tune them step by step with tools such as explain and profile.
2) Connection pool tuning: choose an efficient and suitable connection pool, make a comprehensive judgment based on the pool's working principles, its monitoring data, and the current traffic volume, and arrive at the final parameter values through repeated tuning.
3) Architecture level: including read/write separation, master/slave library load balance, horizontal and vertical library and table, etc., generally require major changes, which need to be considered comprehensively from the overall architecture.
3. Caching
Classification
The local Cache (HashMap/ConcurrentHashMap, Ehcache, RocksDB, Guava Cache, etc.).
Cache services (Redis/Tair/Memcache, etc.).
Key design points
1. When should the cache be updated? How do you ensure updates are reliable and timely?
The cache update policy needs to be analyzed on a case-by-case basis. There are two basic update strategies:
1) Receive the message of change and update it in quasi-real time.
2) Set an expiration time (e.g. 5 minutes) for each cached entry; after it expires, load the data from the DB and set it back into the cache. This strategy is a strong supplement to the first one: it covers the cases where a manual DB change does not send a message, or where the message-consuming update program temporarily fails, either of which would defeat the first strategy. With this double-insurance mechanism, the reliability and timeliness of cached data are effectively guaranteed.
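The double-insurance mechanism above can be sketched in a few lines; this is a minimal in-process sketch, with a plain ConcurrentHashMap standing in for the cache service and a hypothetical dbLoader function standing in for the DB read:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the two update strategies: entries carry an expiration timestamp
// (strategy 2), and change messages update entries in near real time (strategy 1).
class TtlCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader; // hypothetical "load from DB" hook
    private final long ttlMillis;          // e.g. 5 minutes in the text

    TtlCache(Function<K, V> dbLoader, long ttlMillis) {
        this.dbLoader = dbLoader;
        this.ttlMillis = ttlMillis;
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null || System.currentTimeMillis() >= e.expiresAt) {
            V v = dbLoader.apply(key); // expired or missing: reload from DB
            map.put(key, new Entry<>(v, System.currentTimeMillis() + ttlMillis));
            return v;
        }
        return e.value;
    }

    // Strategy 1: a change message arrives -> update the entry immediately.
    void onChangeMessage(K key, V newValue) {
        map.put(key, new Entry<>(newValue, System.currentTimeMillis() + ttlMillis));
    }
}
```

A real deployment would put this logic in front of a cache service rather than an in-process map, but the refresh-on-expiry plus message-driven update structure is the same.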
2. Will the cache be full?
For a cache service with limited capacity, as more and more data is cached, the cache will in theory eventually fill up. How should this be handled?
1) For the cache service, choose the appropriate cache eviction algorithm, such as the most common LRU.
2) Set an appropriate warning threshold for the current capacity; for example, with a 10G cache, raise an alarm when cached data reaches 8G, so problems can be investigated or capacity expanded in advance.
3) For some keys that do not need to be stored for a long time, try to set the expiration time.
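As an illustration of point 1), the LRU idea can be sketched in a few lines of Java using LinkedHashMap's access-order mode. This is an in-process stand-in for demonstration; a real cache service implements the same idea server-side (e.g. Redis's allkeys-lru maxmemory policy):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: LinkedHashMap in access-order mode keeps the least
// recently used entry at the head, and removeEldestEntry evicts it once
// the capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = access order, i.e. LRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry when full
    }
}
```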
3. Can cache be lost? What if I lose it?
Determine whether to allow loss based on service scenarios. If not, you need a caching service with persistence, such as Redis or Tair. In more detail, depending on the business’s tolerance for lost time, you can also choose a more specific persistence strategy, such as Redis’s RDB or AOF.
Common cache problems
1. Cache penetration
Description: Cache penetration refers to data that is neither in the cache nor in the database, and the user continuously initiates requests, such as data with id “-1” or data with an id that is too large to exist. In this case, the user is likely to be the attacker, and the attack will cause the database to be overburdened.
Solution:
1) Add verification at the interface layer, such as user authentication, basic id validation, and directly intercepting requests with id<=0;
2) If the data can be found in neither the cache nor the database, a key-null pair can still be written to the cache, with a short expiration time such as 30 seconds (setting it too long would keep serving null even after the data becomes available). This prevents an attacker from repeatedly hammering the database with the same nonexistent id.
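The two mitigations above can be sketched together. The map here is a stand-in for a cache service (where the null marker would carry a short TTL such as 30 seconds), and the db function is a hypothetical database lookup returning null for absent rows:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of cache-penetration defenses: reject obviously invalid ids at the
// interface layer, and cache a null marker for ids that miss both the cache
// and the DB so repeated attacks never reach the database.
class PenetrationGuard {
    private static final String NULL_MARKER = "__NULL__";
    private final Map<Long, String> cache = new ConcurrentHashMap<>();
    private final Function<Long, String> db; // hypothetical DB lookup

    PenetrationGuard(Function<Long, String> db) { this.db = db; }

    String get(long id) {
        if (id <= 0) return null; // 1) basic id validation, intercept directly
        String v = cache.get(id);
        if (v != null) return NULL_MARKER.equals(v) ? null : v;
        v = db.apply(id);
        // 2) cache even a DB miss (in a real cache: with a ~30s TTL)
        cache.put(id, v == null ? NULL_MARKER : v);
        return v;
    }
}
```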
2. Cache breakdown
Description: Cache breakdown occurs when data is absent from the cache but present in the database (typically because the cached entry has just expired). Too many concurrent users read that key at the same time, all miss the cache simultaneously, and all go to the database for the data, causing a sudden spike in database pressure.
Solution:
1) Set the hotspot data to never expire.
2) Add a mutex, which is a common practice in the industry. In simple terms, on a cache miss, instead of loading from the DB immediately, first try to set a mutex key using a cache operation that reports whether it succeeded (such as Redis SETNX or Memcache ADD). If setting the mutex succeeds, load the DB and set the cache; otherwise, retry the whole get-cache method. Code similar to the following:
public String get(String key) throws InterruptedException {
    String value = redis.get(key);
    if (value == null) { // cache miss
        // try to take the mutex; the 3-minute TTL guards against deadlock
        // if the holder crashes before releasing it
        if (redis.setnx(key_mutex, "1", 3 * 60) == 1) {
            value = db.get(key);                // load from the database
            redis.set(key, value, expire_secs); // rebuild the cache
            redis.del(key_mutex);               // release the mutex
        } else {
            Thread.sleep(50); // another thread is rebuilding; wait and retry
            return get(key);
        }
    }
    return value;
}
3. Cache avalanche
Description: Cache avalanche refers to that a large amount of data in the cache reaches the expiration time, while a large amount of query data causes the database to be overburdened or even down.
Unlike cache breakdown, where concurrent requests contend for the same piece of data, a cache avalanche happens when many different keys expire at once, so a large volume of queries cannot find their data in the cache and all fall through to the database.
Solution:
1) Set the expiration time of cached data randomly, to prevent a large amount of data from expiring at the same moment.
2) If the cache system is distributed, distribute hotspot data evenly among different cache nodes.
3) Set the hotspot data to never expire.
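Solution 1) amounts to adding random jitter to the base TTL so that keys written at the same time do not all expire at the same moment. A minimal sketch (the redis.setex call in the comment is a hypothetical client call, and the numbers are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of randomized expiry: the effective TTL is the base TTL plus up to
// `jitterSeconds` extra seconds chosen uniformly at random, spreading
// expirations out instead of clustering them.
class JitterTtl {
    static int ttlWithJitter(int baseSeconds, int jitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextInt(jitterSeconds + 1);
    }
}

// usage (hypothetical client call):
//   redis.setex(key, JitterTtl.ttlWithJitter(300, 60), value);
```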
4. Cache update
Cache Aside pattern: This is the most commonly used pattern. Its specific logic is as follows:
Miss (invalidation): the application looks up data in the cache; on a miss, it reads the data from the database and puts it into the cache.
Hit: An application retrieves data from the cache and returns it.
Update: save the data to the database first, and after that succeeds, invalidate the corresponding cache entry.
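A minimal sketch of the three paths above, with in-memory maps standing in for the cache service and the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache Aside sketch: read path handles hit and miss; write path writes the
// DB first and then invalidates (rather than updates) the cache entry.
class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for cache service
    private final Map<String, String> db = new ConcurrentHashMap<>();    // stand-in for database

    // Hit: return straight from the cache.
    // Miss: fall back to the DB, then populate the cache.
    String read(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = db.get(key);
            if (v != null) cache.put(key, v);
        }
        return v;
    }

    // Update: write the DB first, then invalidate the cache entry so the
    // next read repopulates it with fresh data.
    void update(String key, String value) {
        db.put(key, value);
        cache.remove(key);
    }
}
```

Invalidating instead of updating on write avoids the stale-write races that can occur when two concurrent updates set the cache in a different order than they hit the database.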
4. Asynchronous processing
Usage scenarios
For some client requests, the server may need to do additional work whose result the user does not care about or does not need to know immediately. Such work is best handled asynchronously.
Benefits
Benefits of asynchronous processing:
1) Shorten the interface response time, quickly return user requests, better user experience.
2) Avoid threads staying in the running state for long periods, which exhausts the service thread pool's available threads, lengthens the thread pool's task queue, and blocks more request tasks, leaving more requests unprocessed in time.
3) Improve the processing performance of services.
Implementation
1. Threads (thread pool)
Use an extra thread or thread pool to process the task in a thread other than the IO thread (which processes the request response), and let the response return first in the IO thread.
If the asynchronous task involves a very large amount of data, it can be further optimized by introducing a BlockingQueue: a batch of asynchronous threads continually adds data to the blocking queue, while an additional thread (or batch of threads) repeatedly takes a preset batch size of data from the queue and processes it in one go, further improving performance.
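The blocking-queue batching idea can be sketched like this; the batch size and the single consumer draining in a loop are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of queue-based batching: producer threads append items to a
// blocking queue, and a consumer repeatedly drains up to `batchSize` items
// and processes each batch in one go (e.g. a single batched DB write).
class BatchProcessor {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final int batchSize;

    BatchProcessor(int batchSize) { this.batchSize = batchSize; }

    // Called from producer (asynchronous worker) threads.
    void submit(String item) { queue.offer(item); }

    // Drain one batch; in a real service this runs in a dedicated loop thread.
    List<String> drainBatch() {
        List<String> batch = new ArrayList<>(batchSize);
        queue.drainTo(batch, batchSize); // non-blocking bulk take, at most batchSize
        return batch;
    }
}
```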
2. Message Queue (MQ)
Use message queue (MQ) middleware; MQ is asynchronous by nature. Some additional tasks may not need to be handled by the current system at all, but by other systems. In that case, the task can be encapsulated as a message and put on a message queue; the reliability of the message middleware guarantees delivery to the systems that care about it, and those systems then do the corresponding processing.
5. NoSQL
NoSQL vs. cache
Using NoSQL here is not the same as caching: the same storage product (such as Redis or Tair) may be involved, but it is used differently. In this section it serves as a DB, and when used as a database, the availability and reliability of the storage solution must be guaranteed.
Usage scenarios
You need to determine whether the data involved in the service is suitable for NoSQL storage, whether the data operation mode is suitable for NoSQL operation, or whether some additional NoSQL features (such as atomic addition and subtraction) are needed.
If service data does not need to be associated with other data, requires no transactions or foreign keys, and may be written frequently, NoSQL (such as HBase) is a good fit. Monitoring and logging systems usually collect a large amount of time-series data; such "write more, read less" metric data suits stores like Elasticsearch and OpenTSDB.
6. Multithreading and distributed processing
Usage scenarios
Offline tasks, asynchronous tasks, big-data tasks, and time-consuming tasks can all be accelerated when multithreading is used properly.
Note: if there are strict response-time requirements online, use multithreading as little as possible, especially when the server thread has to wait for task threads (many serious incidents are related to this). If it must be used, set a maximum wait time for the server thread.
Common practice
If a single machine's processing capacity can meet actual business needs, use single-machine multithreading as much as possible to reduce complexity; otherwise, the multi-machine multithreaded mode is required.
For single-machine multithreading, the mechanism of thread pool can be introduced, which has two functions:
1) Improve performance and save overhead of thread creation and destruction.
2) Rate limiting: give the thread pool a fixed capacity; once that capacity is exceeded, tasks enter the queue and wait, which guarantees stable processing capacity under the machine's maximum load. The meaning of each constructor parameter must be understood carefully, such as corePoolSize, maximumPoolSize, keepAliveTime, workQueue, etc.; on that basis, the optimal effect can be reached by continually testing and adjusting these values.
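A sketch of constructing such a bounded pool with ThreadPoolExecutor; all parameter values are illustrative starting points, and CallerRunsPolicy is one possible saturation policy (it applies back-pressure by running the task in the submitting thread instead of dropping it):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bounded thread pool sketch: corePoolSize/maximumPoolSize cap concurrency,
// the bounded work queue absorbs bursts, keepAliveTime reclaims idle
// non-core threads, and the rejection policy decides behavior at saturation.
class PoolFactory {
    static ThreadPoolExecutor newBoundedPool() {
        return new ThreadPoolExecutor(
                4,                                 // corePoolSize: threads kept alive
                8,                                 // maximumPoolSize: burst ceiling
                60L, TimeUnit.SECONDS,             // keepAliveTime for idle non-core threads
                new ArrayBlockingQueue<>(100),     // bounded work queue (the rate limit)
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when saturated
    }
}
```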
If a single machine's processing capacity cannot meet demand, the multi-machine multithreaded mode is needed. This requires some knowledge of distributed systems; a mature open-source distributed task scheduling system such as XXL-JOB can be chosen.
7. JVM optimization
If your primary back-end language is Java, JVM tuning can also improve the performance of Java programs. JVM tuning is usually done late in software development, such as near the end of development or at a project milestone, since JVM parameters directly affect the performance of Java programs.
Performance indicators
Pay attention to the following indicators: CPU usage, CPU load, GC Count, GC Time, and GC logs
Check the GC status of the Java process: jstat -gcutil {pid} 1000
Troubleshoot high CPU usage in a Java process:
1) Get the Java process pid: ps -ef | grep java
2) Find which thread's CPU usage is too high: top -H -p {pid}
3) Convert the thread id to hexadecimal: printf "%x\n" {tid}
4) View that thread's stack with jstack: jstack {pid} | grep {nid} -C 10 --color
Two Java tools are recommended:
1) show-busy-java-threads
github.com/oldratlee/u…
2) arthas
alibaba.github.io/arthas/inde…
Optimization directions
For example, JVM heap size (-Xms, -Xmx), garbage collection strategy, etc.
To tune at the JVM level, you need a certain understanding of how the JVM works, such as the memory structure and the types of GC; then set reasonable JVM parameters according to the characteristics of the application. GC tuning, however, should be the last task undertaken.