01. What this is about
People always say the road is uneven: others ride horses while I ride a donkey, but looking back at the man pushing a cart, I am worse off than some and better off than plenty.
Looking back and tallying up the pits I have filled in over the years, I have to admit I feel pretty good about myself.
No small talk and no detours today: I just want to hand you a laundry list of hard-won lessons so you don't fall into the same pits again.
Grab a snack, sit tight, and hold on. Here we go.
02. The lessons
Lesson 1: CPU usage stays at 100% for a long time
Problem phenomenon:
Under multi-user concurrency, CPU utilization stays at 100% for long stretches. A thread dump shows that the threads burning CPU are all busy with HashMap operations.
Cause analysis:
- A container that is not thread-safe was chosen for a concurrent scenario; HashMap is not thread-safe.
- Under concurrent access, a HashMap resize can corrupt its internal structure and create an infinite loop, driving CPU utilization to 100%.
Solutions:
- Avoid using HashMap in concurrent scenarios.
- If a map must be shared across threads, use ConcurrentHashMap (or guard access with synchronization) instead of a plain HashMap, as in the sketch below.
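As a minimal sketch of the fix, the snippet below replaces the shared HashMap with a ConcurrentHashMap; the class and field names are illustrative, not taken from the original incident.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HitCounter {
    // A plain HashMap shared across threads can loop forever during a concurrent resize;
    // ConcurrentHashMap supports concurrent reads and writes safely.
    private final Map<String, Long> counts = new ConcurrentHashMap<>();

    public void record(String key) {
        // merge() makes the read-modify-write atomic for each key
        counts.merge(key, 1L, Long::sum);
    }

    public long get(String key) {
        return counts.getOrDefault(key, 0L);
    }
}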
Lesson 2: Database access is slow
Problem phenomenon:
The database receives a large number of requests and responds slowly.
Cause analysis:
Indexes are in place, but too few database connections are allocated and there is no result caching, so every request goes to the database.
Solutions:
- For applications that query the database, consider whether the query results can be reused.
- For queries whose results change slowly but are requested frequently, caching the results saves database resources and greatly improves the application's own efficiency, as sketched below.
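As a rough illustration of such caching, here is a minimal time-bounded cache built on ConcurrentHashMap; the names (QueryCache, the loader function, the TTL) are assumptions for the example, and a production system might use a library such as Caffeine or an external cache instead.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class QueryCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long ttlMillis) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public QueryCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Return a cached value while it is still fresh; otherwise call the loader
    // (for example, a database query) and cache the new result.
    public V get(K key, Function<K, V> loader) {
        Entry<V> e = cache.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            V value = loader.apply(key);
            cache.put(key, new Entry<>(value, ttlMillis));
            return value;
        }
        return e.value;
    }
}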
Lesson 3: The number of system connections is huge
Problem phenomenon:
The number of system connections is huge
Cause analysis:
Tomcat's default connector uses TCP/IP with blocking I/O (BIO). This mode is not suited to high concurrency: each socket created in BIO mode consumes local resources, connections are slow to establish, and the number of connections that can be supported is limited.
In BIO mode the connector accepts a socket and then dedicates a thread to it, so "one connection, one thread" ties up a thread whether or not the connection is carrying a real request.
If the server needs to hold a large number of connections but the peak number of simultaneous requests from those connections is not very large, switching to NIO mode is generally recommended.
Solutions:
Switch to TCP/IP + NIO. The NIO connector is built into Tomcat and needs only a change to the configuration file conf/server.xml: set the connector's protocol attribute to org.apache.coyote.http11.Http11NioProtocol and restart Tomcat for it to take effect.
The modified configuration looks like this (the protocol attribute defaults to HTTP/1.1):
<Connector port="8080"
protocol="org.apache.coyote.http11.Http11NioProtocol"
connectionTimeout="20000"
redirectPort="8443"
maxThreads="500"
minSpareThreads="20"
acceptCount="100"
disableUploadTimeout="true"
enableLookups="false"
URIEncoding="UTF-8" />
Lesson 4: A version problem with HttpClient
Problem phenomenon:
Not every request returns correctly; some fail with "The remote server returned an error. (417) Unknown" (HTTP status 417 is Expectation Failed).
Cause analysis:
The system uses HttpClient 4.0.3, where Expect: 100-continue is enabled by default: before sending the request body, the client first asks the server whether it is willing to handle the request. This extra handshake is intended for uploading special files or large payloads, and it can leave a large number of connections in TIME_WAIT.
HttpClient 4.1 resolved this by disabling the feature by default.
Solutions:
Upgrade to HttpClient 4.1 or later, where Expect: 100-continue is disabled by default, or turn the feature off explicitly in the client configuration.
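If upgrading alone is not enough, or you want the behaviour to be explicit, the following sketch (assuming HttpClient 4.3+ and its RequestConfig API) builds a client with Expect: 100-continue turned off:

import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoExpectContinueClient {
    public static CloseableHttpClient build() {
        // Do not send "Expect: 100-continue"; the request body goes out in one round trip.
        RequestConfig config = RequestConfig.custom()
                .setExpectContinueEnabled(false)
                .build();
        return HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build();
    }
}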
Lesson 5: Logging performance problems with Log4j
Problem phenomenon:
Log I/O consumes a large share of system resources, CPU usage is high, and TPS is low. Thread dumps show most of the time being spent in logging methods such as:
org.apache.log4j.Category.callAppenders
ch.qos.logback.core.OutputStreamAppender.subAppend
Cause analysis:
The log volume is too large and I/O operations are too frequent. Logging affects system performance in the following ways:
- Some pattern conversion options are very slow, for example %C (class), %F (file), %L (line), %l (location), and %M (method); avoid them.
- Log output is duplicated: some applications write business logs both to the console and to a file, or write the same information twice into one file.
- The output destination matters: writing to the console is slower than writing to the file system.
- The layout also matters: a simple output pattern is faster than a heavily formatted one, so use the simplest pattern that meets your needs.
- The lower the log level, the more log events are produced and the greater the impact on the system.
- The output mode matters: asynchronous output performs better than synchronous output.
- Writing to the destination on every single log event is slower than buffering events and flushing once the buffer reaches a certain size.
Solutions:
- Simplify the log output content, set the output pattern sensibly, and avoid the slow conversion options listed above.
- Enable a log buffer with an appropriate size and prefer asynchronous output (a sketch follows after this list).
- Write each business log to the file system only once. Taking log4j as an example, a root logger like this writes every event to two destinations:
log4j.rootLogger=DEBUG, stdout, system
This sends DEBUG-level logs to both the stdout and system appenders, so every log line is written twice. Change the line to:
log4j.rootLogger=DEBUG, system
so that logs go only to the system destination.
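As a sketch of buffered, asynchronous output (assuming log4j 1.x configured programmatically; the file name and buffer sizes are illustrative), the root logger below writes through an AsyncAppender that wraps a buffered FileAppender, so events are queued and flushed in batches rather than written one by one:

import org.apache.log4j.AsyncAppender;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class AsyncLoggingSetup {
    public static void configure() throws java.io.IOException {
        // Keep the pattern simple; avoid the slow %C, %F, %L, %l, %M options.
        FileAppender file = new FileAppender(
                new PatternLayout("%d %p %c - %m%n"),
                "app.log",
                true,       // append
                true,       // buffered I/O: do not flush on every event
                8 * 1024);  // buffer size in bytes

        AsyncAppender async = new AsyncAppender();
        async.setBufferSize(512);   // queue size for pending log events
        async.addAppender(file);

        Logger root = Logger.getRootLogger();
        root.removeAllAppenders();  // make sure each log line is written only once
        root.addAppender(async);
        root.setLevel(Level.INFO);  // avoid DEBUG-level verbosity in production
    }
}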
Lesson 6: Logback performs better than Log4j
The same log printing workload was run with both the Logback and Log4j logging components.
With the log level at DEBUG, the average TPS ratio was logback : log4j = 1.31 : 1.
With the log level at INFO, the average TPS ratio was logback : log4j = 1.03 : 1.
When log output is heavy, Logback has the edge over Log4j in processing speed; in these experiments Logback's throughput was 3% to 30% higher than Log4j's.
Lesson 7: The laundry list is already full; any more and it would not fit, and you might not digest it all anyway. (A sequel is clearly destined.)
03. Until next time
These are a few lessons from years of development work, shared to keep you out of the same pits, and maybe to give you something to brag about in interviews (ha ha).
If you found it even a little interesting, no need for praise; just tap "Looking" at the bottom right, or share it with more of your friends. Much appreciated.