⚠️ This article was first published as a signed contract article for the Nuggets (Juejin) community; reprinting without authorization is prohibited.
Log4j, Log4j2, and Logback are the mainstream logging frameworks, while Slf4j is the logging facade that sits in front of them. Logback implements the Slf4j API natively and is the default logging framework in Spring Boot.
From a performance and engineering perspective, however, Log4j2 is the de facto standard, and it is the framework this article is based on.
Why do you need dynamic logging levels
First, a quick overview of Log4j2:
- Configuration: production projects generally use XML configuration
- Synchronous/asynchronous: Log4j2 supports both synchronous and asynchronous logging; the high-performance asynchronous mode is recommended unless a special scenario requires otherwise
- Logger: a Logger node specifies the configuration for a given logger, including but not limited to its log level, Appender references, and name attribute
- Appenders: an Appender is only responsible for writing log events to its target; the Logger routes each event to its designated Appenders (a small sketch after this list shows how to inspect these pieces at runtime)
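For readers who like to poke around, here is a minimal sketch (the class name ConfigInspectionDemo is just for illustration) that prints the configured Loggers and Appenders of a running application; the same Configuration object is what we will manipulate later in this article:

```java
import java.util.Map;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class ConfigInspectionDemo {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();

        // Every explicitly configured Logger and its level
        for (Map.Entry<String, LoggerConfig> entry : configuration.getLoggers().entrySet()) {
            System.out.println("Logger: " + entry.getKey() + " -> " + entry.getValue().getLevel());
        }

        // Every configured Appender (file, console, ...)
        for (Map.Entry<String, Appender> entry : configuration.getAppenders().entrySet()) {
            System.out.println("Appender: " + entry.getKey());
        }
    }
}
```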
As seasoned programmers, we are all well aware of the importance of logging and the basics of using a logging framework.
Most of the time a log is a great tool for locating problems, but every coin has two sides: sometimes the log itself is the trigger.
Is the interface inexplicably slow?
Generally speaking, an interface's response time is mostly spent on the network, the DB layer, I/O, or some computation. But when I was once troubleshooting a B-side production issue, I found that the whole interface was being dragged down by log printing.
Because a business exception in the old system was handled improperly, a huge number of error logs were written out, blocking the current thread, driving up machine load, and dragging down the throughput of the entire interface.
It’s actually easy to pin down this kind of baffling problem:
- Check whether the disk is abnormal (disk usage and file size)
- Run jstack to inspect the threads with high CPU usage
- Observe the log output (the anomaly is very obvious in such cases)
PS: The fix was to optimize the log output and switch the synchronous logging to asynchronous logging.
Giving away CPU online
When we look at a log we want it to be accurate, clean, and easy for other tools to parse and analyze, so a log line usually reads something like this:
log.info("Method :: Kerwin.Demo, Params: {}, Result:{}", JSONUtil.toJsonStr(demo), JSONUtil.toJsonStr(demo));
Copy the code
This involves serialization, and when there are many such log statements and the interface is called heavily (hundreds of thousands of C-side calls per minute), CPU usage climbs noticeably even though the program itself is otherwise fine.
So we want to use the logging framework’s ability to adjust log levels dynamically, so that logs left in during the coding phase do not need to be deleted before going live and can be used more flexibly for online troubleshooting and daily work.
How do I dynamically configure log levels
For example, in Log4j2, we might configure the ROOT Logger like this.
<Loggers>
<AsyncRoot level="ERROR" includeLocation="true">
<AppenderRef ref="INFO_LOG"/>
<AppenderRef ref="ERROR_LOG"/>
</AsyncRoot>
</Loggers>
Copy the code
Run the following code and look at the log file:
```java
@Slf4j
public class LogTest {
    public static void main(String[] args) {
        log.info("This is a Demo...");
    }
}
```
Because I changed ROOT’s log level to ERROR, there was no log output
Using the capabilities provided by Log4j2, you can change the log level as follows:
```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;

@Slf4j
public class LogTest {
    public static void main(String[] args) {
        log.info("This is a Demo... 111");
        Level level = Level.toLevel("info");
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();
        // There is no logger explicitly named "ROOT", so this resolves to the root LoggerConfig
        configuration.getLoggerConfig("ROOT").setLevel(level);
        context.updateLoggers(configuration);
        log.info("This is a Demo... 222");
    }
}
```
The result: "This is a Demo... 111" is not printed (ROOT is still at ERROR), while "This is a Demo... 222" is printed once the level has been lowered to INFO.
The ability to change log levels is very easy to implement, so when building your own application or system you can use ZooKeeper for dynamic configuration, or expose an HTTP or RPC interface as a back door for dynamic tuning, as sketched below.
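For illustration, here is a minimal sketch of such an HTTP back door, assuming a Spring MVC application; the endpoint path, the parameter names, and the LogLevelController class are hypothetical and not part of the original setup:

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical admin endpoint: POST /admin/log-level?loggerName=log.demo&level=DEBUG
@RestController
public class LogLevelController {

    @PostMapping("/admin/log-level")
    public String changeLevel(@RequestParam String loggerName, @RequestParam String level) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();
        // If loggerName is not explicitly configured, this resolves to the nearest
        // configured ancestor (possibly ROOT), so the change affects that ancestor
        configuration.getLoggerConfig(loggerName).setLevel(Level.toLevel(level));
        context.updateLoggers(configuration);
        return "OK";
    }
}
```

Remember to protect such an endpoint: it is a back door for operations, not something that should be reachable by ordinary callers.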
How do I configure the log level for a specified class
The code above demonstrates how to dynamically configure the ROOT Logger node. If you are reading this with an IDE open, take a look at org.apache.logging.log4j.core.config.Configuration#getLoggers: this method returns every explicitly configured logger, and each of them can be configured and modified in the same way.
Example code:
```java
String packetLoggerName = "log.demo";
Level level = Level.toLevel("DEBUG");
LoggerContext context = (LoggerContext) LogManager.getContext(false);
Configuration configuration = context.getConfiguration();
// Walk all explicitly configured loggers and change the level of the matching one
for (Map.Entry<String, LoggerConfig> entry : configuration.getLoggers().entrySet()) {
    if (packetLoggerName.equals(entry.getKey())) {
        entry.getValue().setLevel(level);
    }
}
context.updateLoggers(configuration);
```
Question: how does the framework handle a class without its own Logger?
First, the conclusion:
- Not every Logger has its own configuration
- The configuration that actually takes effect is not necessarily the Logger's own; in the example above the logger is log.demo.test.LogTestApp
- Logger configurations are inherited: without extra configuration, log.demo.test.LogTestApp uses its parent's configuration log.demo, and so on up the hierarchy until ROOT
You can verify this with the following configuration and code:
<Loggers>
<AsyncRoot level="INFO" includeLocation="true">
<AppenderRef ref="Console"/>
</AsyncRoot>
<logger name="log.demo" level="ERROR">
<AppenderRef ref="ERROR_LOG"/>
</logger>
</Loggers>
Copy the code
```java
@Slf4j
public class LogTestApp {
    public static void main(String[] args) {
        System.out.println(log.getName());
    }
}
// Output:
// log.demo.test.LogTestApp
```
Looking at the configuration file, it's clear that no Logger named log.demo.test.LogTestApp actually exists anywhere in the logging configuration. So how does it work?
The core code is analyzed as follows:
```java
private static final Logger log = LoggerFactory.getLogger(LogTestApp.class);
```
Following the factory method into the source, you end up in the class AbstractConfiguration:
```java
@Override
public LoggerConfig getLoggerConfig(final String loggerName) {
    LoggerConfig loggerConfig = loggerConfigs.get(loggerName);
    if (loggerConfig != null) {
        return loggerConfig;
    }
    String substr = loggerName;
    while ((substr = NameUtil.getSubName(substr)) != null) {
        loggerConfig = loggerConfigs.get(substr);
        if (loggerConfig != null) {
            return loggerConfig;
        }
    }
    return root;
}
```
The NameUtil.getSubName() method strips one level off the end of the current class's fully qualified name, and the loop walks upward until it finds the closest configured Logger (the boundary is the package name, recursing from child to parent).
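A quick way to see this resolution in action is to ask the Configuration which LoggerConfig a fully qualified class name resolves to. A minimal sketch, assuming the XML configuration shown above is in effect (the class name LoggerResolutionDemo is just for illustration):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class LoggerResolutionDemo {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();
        // There is no LoggerConfig named "log.demo.test.LogTestApp",
        // so getLoggerConfig walks up the name hierarchy and returns "log.demo"
        LoggerConfig effective = configuration.getLoggerConfig("log.demo.test.LogTestApp");
        System.out.println(effective.getName() + " -> " + effective.getLevel());
        // Expected output with the configuration above: log.demo -> ERROR
    }
}
```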
With these features in mind, we have the answer to our question.
Configure the parent Logger as required
As engineering practices mature, the layering of system code becomes very clear. Taking our company as an example, it is mainly divided into four layers: Dao, Service, Business, and Api. The core business logic generally lives in the Business layer, so that is also where problems tend to occur. We can configure the following Logger to dynamically adjust the log level of the whole Business layer and achieve more precise control:
<Loggers>
<logger name="com.jd.o2o.business" level="INFO">
<! -- INFO_LOG -->
</logger>
</Loggers>
Copy the code
Similarly, if you need to pay attention to the DB layer, you can configure a parent Logger for it to watch the DB layer's logs; a sketch of adjusting such a parent Logger at runtime follows.
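Combined with the dynamic adjustment shown earlier, bumping the whole Business layer to DEBUG during an incident becomes a one-liner. A minimal sketch using Log4j2's own Configurator helper (an alternative to the getLoggerConfig/setLevel pattern above, not something the original configuration requires):

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class BusinessLogLevelDemo {
    public static void main(String[] args) {
        // Lower the parent logger for the whole Business layer to DEBUG at runtime;
        // every logger under com.jd.o2o.business without explicit config inherits it
        Configurator.setLevel("com.jd.o2o.business", Level.DEBUG);
    }
}
```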
Dynamically generated Logger
As the dynamic log levels above suggest, if a logging framework supports dynamically refreshing its configuration, then it can also support dynamically adding configuration (even if the current version does not expose this, it simply has not been implemented yet).
By reading the source code, we can see that the following method meets our requirement:
org.apache.logging.log4j.core.config.Configuration#addLogger
PS: Personally I do not recommend this approach: the coding is relatively cumbersome, and being overly flexible makes problems harder to troubleshoot. Still, a rough sketch follows.
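For completeness, here is a minimal sketch of adding a LoggerConfig programmatically. It assumes an Appender named "ERROR_LOG" already exists in the configuration; the logger name log.demo.dynamic is purely illustrative:

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.Appender;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class AddLoggerDemo {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();

        // Create a new LoggerConfig: name, level, additivity
        LoggerConfig loggerConfig = new LoggerConfig("log.demo.dynamic", Level.WARN, false);

        // Attach an existing appender from the configuration (assumed to be defined in XML)
        Appender errorLog = configuration.getAppender("ERROR_LOG");
        if (errorLog != null) {
            loggerConfig.addAppender(errorLog, Level.WARN, null);
        }

        // Register the new logger and propagate the change to live loggers
        configuration.addLogger("log.demo.dynamic", loggerConfig);
        context.updateLoggers(configuration);
    }
}
```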
Puzzle: inexplicable output
Sometimes we hit this puzzle: I configured a Logger at ERROR level and pointed it only at the ERROR file, yet its events also show up in the INFO output. Why?
This is actually by design in the logging framework, and the core code looks like this:
```java
private void processLogEvent(final LogEvent event, final LoggerConfigPredicate predicate) {
    event.setIncludeLocation(isIncludeLocation());
    if (predicate.allow(this)) {
        callAppenders(event);
    }
    logParent(event, predicate);
}
```
When a log is printed, the event is handed to all of the Logger's own Appenders and is then passed on to the parent Logger, which is why output can end up in both the INFO and ERROR files even though we only configured ERROR.
Look at the following code to find the solution:
```java
// The core code
private void logParent(final LogEvent event, final LoggerConfigPredicate predicate) {
    if (additive && parent != null) {
        parent.log(event, predicate);
    }
}
```

To stop this, set additivity to false on the Logger:

```xml
<logger name="log.demo" level="INFO" additivity="false"></logger>
```
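The same switch can also be flipped at runtime, following the same pattern as the earlier level changes. A minimal sketch (the logger name log.demo is taken from the example configuration above):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;

public class AdditivityDemo {
    public static void main(String[] args) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration configuration = context.getConfiguration();
        // Stop log.demo from forwarding its events to the parent (ROOT) appenders
        configuration.getLoggerConfig("log.demo").setAdditive(false);
        context.updateLoggers(configuration);
    }
}
```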
Tips for printing a log
In addition to printing correctly, there are a few Tips for log usage, such as:
- Option 1:

```java
log.info("This is a Demo, Request:{}", JSONUtil.toJsonStr(new LogTest()));
```

- Option 2:

```java
if (log.isInfoEnabled()) {
    log.info("This is a Demo, Request:{}", JSONUtil.toJsonStr(new LogTest()));
}
```
Which of the two would you choose? The comparison makes it clear: Option 2, because when INFO is disabled it avoids the unnecessary serialization altogether.
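A related trick, if you call the Log4j2 Logger API directly rather than through the Slf4j facade and Lombok's @Slf4j: since Log4j 2.4 the logging methods also accept Supplier lambdas, which are evaluated only when the level is enabled, giving the same lazy behavior without an explicit check. A minimal sketch (assuming JSONUtil is Hutool's, as in the examples above):

```java
import cn.hutool.json.JSONUtil; // assuming Hutool's JSONUtil, as used in the article's examples

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LazyLoggingDemo {
    private static final Logger LOGGER = LogManager.getLogger(LazyLoggingDemo.class);

    public static void main(String[] args) {
        // The Supplier is evaluated only if INFO is enabled,
        // so the serialization cost is skipped when the level is raised
        LOGGER.info("This is a Demo, Request:{}", () -> JSONUtil.toJsonStr(new LogTest()));
    }
}
```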
Conclusion
Logging is a programmer’s best friend most of the time, but it can be a deadly risk in subtle situations, so you need to be familiar with its configuration and understand how it works.
Here are some tips on how to use the logging framework:
- Use Slf4j as a bridging facade and avoid depending directly on a particular logging framework
- Set up the Root Logger and its child Loggers properly so that framework-level logs from dependencies are routed to their own files, which makes troubleshooting easier
- Use dynamic log levels so the online log level can be adjusted at any time
- To reduce serialization in logging, check whether the level is enabled (for example with isInfoEnabled/isDebugEnabled) before building expensive log messages, to avoid unnecessary serialization
- Unless a special scenario requires otherwise, use high-throughput asynchronous logging
If you found this helpful:
- A like is of course much appreciated
- You can also search for and follow the official account "is Kerwin ah", and let's keep going down the road of technology together ~ 😋