This article is part of a series. Stay tuned for updates.

This article is packed with content, so get comfortable before you dig in, and remember to give it a thumbs up.

Preface

Last time we discussed the performance trade-offs of logging. A reader commented that the post had no hands-on content, so this time I decided to get practical and show some real strength 💪: how, in production, to adjust the log level dynamically per request or per user, and even route the output to a separate file. This article is code-heavy, so I suggest running the code as you read.

Imagine a requirement: a colleague from customer service reports that a user has hit a BUG in production. You start with log analysis, stare at a 200 MB log file until your eyes hurt, and still find nothing useful. You finally decide to enable DEBUG logging, restart the project, and ask the user to repeat the operation; two minutes later the disk alarm goes off, operations curses you, and the boss is furious because the production environment had to be restarted.

Put on a product manager's hat and ask yourself:

  • As a small goal: can we adjust log levels dynamically without stopping the service?
  • Can we dynamically adjust the log level for a single user or even a single request?
  • Entering the world of the strong: beyond per-user dynamic levels, can we temporarily route output to a separate file per user or per request?

After sorting out the above requirements, let’s try to solve them one by one.

Dynamic log level adjustment (achieving the small goal)

Both Logback and Log4j2 natively support monitoring the log configuration file and hot-reloading it when it changes. It is simple to enable, right in the configuration file:

<!-- Logback configuration -->
<configuration scan="true" scanPeriod="30 seconds">...</configuration>

<!-- Log4j2 configuration -->
<Configuration monitorInterval="30">...</Configuration>

Combat 2: Dynamically adjust the log output level

Such a simple mechanism cannot really meet our needs, so we keep pushing; after all, we have higher pursuits.

Let’s break it down:

  • To adjust the level dynamically by user, we must be able to differentiate users;
  • To differentiate log levels by request, we need to identify requests;
  • To dynamically adjust logging levels, we need to analyze the implementation of the logging framework;

Let’s tackle them one by one:

  • There are two types of mainstream Web projects: old projects use sessions, while new projects mostly use tokens:
    • Session: it is easy to store a per-session log-level flag in the session.
    • Token: not complicated either; store the flag keyed by token in a cache such as Redis, or in a database (databases are not recommended).
  • Distinguishing requests:
    • Provide the user with a switch and, in a single-page application, attach a parameter to every HTTP request; this is a risky approach and not suitable for public Internet projects.
    • For special requirements, bake the flag into the code during development.

Best practice for distinguishing users: expose a feature to administrators that lists online users, so the administrator can click a user and choose the log level to output for that user.

  • Dynamically adjusting the log level is the most intractable part. Digging into the Logback API, we find that ch.qos.logback.classic.Logger already provides a setLevel method, which looks like it can change the output level at runtime. But the Logger instance is shared: if we change its level, we will inevitably pollute other requests and even other users. That clearly will not work, so how do we solve it?

    There are only two options: read the source code or read the documentation. Let's start with the source code and analyze how Logback decides whether to output a log.

    A call like log.debug("A debug log") eventually lands in Logger#filterAndLog_0_Or3Plus:

    private void filterAndLog_0_Or3Plus(final String localFQCN, 
                                        final Marker marker, 
                                        final Level level, 
                                        final String msg, 
                                        final Object[] params,
                                        final Throwable t) {
      // The magic method
      final FilterReply decision = loggerContext
        .getTurboFilterChainDecision_0_3OrMore(marker, this, level, 
                                               msg, params, t);
      // Determine log level if the magic method above returns NEUTRAL
      if (decision == FilterReply.NEUTRAL) {
        if (effectiveLevelInt > level.levelInt) {
          return;
        }
      } else if (decision == FilterReply.DENY) {
        // If the decision is DENY, skip logging
        return;
      }
      // If you can get to this point, log
      buildLoggingEventAndAppend(localFQCN, marker, 
                                 level, msg, params, t);
    }

    Following getTurboFilterChainDecision_0_3OrMore leads us to TurboFilterList, which extends CopyOnWriteArrayList<TurboFilter>, so it is itself a List. The logic inside TurboFilterList#getTurboFilterChainDecision simply loops over its own elements, asks each TurboFilter for a FilterReply, and returns the result. Let's go over the code:


    public FilterReply getTurboFilterChainDecision(final Marker marker, 
                                                   final Logger logger, 
                                                   final Level level, 
                                                   final String format, 
                                                   final Object[] params,
                                                   final Throwable t) {
    
      final int size = size();
      // If there is only one, the result is returned directly. If there is an exception, the result is returned directly
      // size cannot be 0, because the caller has already judged it, which is not shown here
      if (size == 1) {
        try {
          TurboFilter tf = get(0);
          return tf.decide(marker, logger, level, 
                           format, params, t);
        } catch (IndexOutOfBoundsException iobe) {
          return FilterReply.NEUTRAL;
        }
      }
      // If there are more than one, loop and return the first result that is not NEUTRAL
      Object[] tfa = toArray();
      final int len = tfa.length;
      for (int i = 0; i < len; i++) {
        // for (TurboFilter tf : this) {
        final TurboFilter tf = (TurboFilter) tfa[i];
        final FilterReply r = tf
          .decide(marker, logger, level, format, params, t);
        if (r == FilterReply.DENY || r == FilterReply.ACCEPT) {
          return r;
        }
      }
      return FilterReply.NEUTRAL;
    }

    Why does the author separate the logic for a single TurboFilter from the logic for multiple TurboFilters? Follow TurboFilter for the answer (there's an extra explanation as a bonus for WeChat followers).

    To recap: every log call, regardless of level, eventually enters filterAndLog_0_Or3Plus to decide whether to produce output, and that decision first consults the return value of TurboFilterList#getTurboFilterChainDecision, which in turn delegates to the registered TurboFilters. Following the source further, we find that TurboFilterList is empty by default. We can be fairly sure the author did not leave an empty List for nothing; it must be an extension point for us. So we turn to the documentation, confident the extension mechanism is described there. Looking through the documentation for TurboFilters, we find the following description:

    TurboFilters

    TurboFilter objects all extend the TurboFilter abstract class. Like the regular filters, they use ternary logic to return their evaluation of the logging event.

    Overall, they work much like the previously mentioned filters. However, there are two main differences between Filter and TurboFilter objects.

    TurboFilter objects are tied to the logging context. Hence, they are called not only when a given appender is used, but each and every time a logging request is issued. Their scope is wider than appender-attached filters.

    More importantly, they are called before the LoggingEvent object creation. TurboFilter objects do not require the instantiation of a logging event to filter a logging request. As such, turbo filters are intended for high performance filtering of logging events, even before the events are created.

    In short, a TurboFilter also controls whether a log is output, and it has higher priority than regular filters; hardly surprising, given the name Turbo.

Given the examples in the documentation, let’s define our own TurboFilter.

/**
 * DynamicLoggingFilter
 *
 * @author jiyanwai
 * @date 2020-01-15 16:09:16
 */
@Slf4j
public class DynamicLoggingFilter extends TurboFilter {

  public static final String DEBUG_HEADER_NAME = "X-Debug";
  public static final String DEBUG_SESSION_KEY = DEBUG_HEADER_NAME;

  /**
   * Log output decision logic.
   *
   * @return FilterReply.DENY, FilterReply.NEUTRAL or FilterReply.ACCEPT
   */
  @Override
  public FilterReply decide(Marker marker, Logger logger, Level level, String format, Object[] params, Throwable t) {

    // We can filter our own logger here
    boolean isMyLogger = logger.getName().startsWith("com.jiyuanwai");
    if (!isMyLogger) {
      return FilterReply.NEUTRAL;
    }

    RequestAttributes requestAttributes = RequestContextHolder
        .getRequestAttributes();

    // There may be no RequestAttributes when the project starts or runs a timer
    if (requestAttributes == null) {
      return FilterReply.NEUTRAL;
    }

    // RequestHeader is used to distinguish threads
    ServletRequestAttributes servletRequestAttributes =
        (ServletRequestAttributes) requestAttributes;
    // Based on the request parameters, the actual production environment can develop functions for the management functions, and put the user's unique identifier into the cache or session
    HttpServletRequest request = servletRequestAttributes.getRequest();
    String debug = request.getHeader(DEBUG_HEADER_NAME);
    boolean debugBoolean = Boolean.parseBoolean(debug);
    if (debugBoolean) {
      return FilterReply.ACCEPT;
    }

    // Determine the Session
    HttpSession session = request.getSession(false);
    if (session == null) {
      return FilterReply.NEUTRAL;
    }
    Object attribute = session.getAttribute(DEBUG_SESSION_KEY);
    debugBoolean = Boolean.parseBoolean((String) attribute);
    if (debugBoolean) {
      return FilterReply.ACCEPT;
    }

    return FilterReply.NEUTRAL;
  }
}

The log configuration file is modified as follows:

<configuration scan="false" debug="false">
    <turboFilter class="com.jiyuanwai.logging.extend.dynamic.DynamicLoggingFilter" />
    ...
    <root level="error">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

Write a method to test it

@GetMapping
public void debugLevel() {
  log.debug("A debug Log");
}

It is recommended that you use the HTTP Request built into Idea:

### Test using a RequestHeader
# Passing the flag via a header avoids polluting request parameters, which could be a problem (e.g. file uploads)
GET http://localhost:8082/log/level
Accept: */*
X-Debug: true
Cache-Control: no-cache

###

After execution, you can see on the console:

22:50:19.816 DEBUG [082-exec-5] c.j.l.e.c.LogController       (  35) - A debug Log

We have now implemented dynamic log-level adjustment per request.

Next, let's test the Session approach. I'll keep it simple here and only add a Controller method for testing; in production it can be combined with the online user management feature.

@PutMapping
public void startDebugBySession(HttpSession session) {
  // For testing only; in production, obtain the target user's session via online user management and then set the attribute
  session.setAttribute(DynamicLoggingFilter.DEBUG_SESSION_KEY, "true");
}

The request for enabling session-level debug is as follows

### Enable DEBUG mode for the current user's Session
PUT http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

### Remove the custom header and test again
GET http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

The results are as follows

09:28:16.662 DEBUG [082-exec-1] c.j.l.e.c.LogController       (  40) - A debug Log

The Token approach is almost identical to the Session one, so I won't demonstrate it fully here (a rough sketch follows).
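For readers who want to see its shape anyway, here is a minimal sketch, assuming the debug flag is stored in Redis under a key derived from the token and that a StringRedisTemplate is available; the header name "X-Auth-Token" and the key prefix "log:debug:" are made up for illustration and are not part of the project.

/**
 * Sketch only: a token-based variant of the check inside DynamicLoggingFilter#decide.
 * The header name and the Redis key layout are hypothetical.
 */
private FilterReply decideByToken(HttpServletRequest request, StringRedisTemplate redisTemplate) {
  String token = request.getHeader("X-Auth-Token");
  if (token == null) {
    return FilterReply.NEUTRAL;
  }
  // A management endpoint would have stored "true" under this key when enabling debug for the user
  String flag = redisTemplate.opsForValue().get("log:debug:" + token);
  return Boolean.parseBoolean(flag) ? FilterReply.ACCEPT : FilterReply.NEUTRAL;
}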

At this point, we have elegantly implemented dynamic logging by request and by user, and can allow ourselves a little pride.

Real Combat: The World of the Strong

Let’s implement the final requirement, dynamic output of log files.

Let's analyze the requirement first. To decide which users should get a temporary standalone file, we can reuse the approach above; the question left to solve is: how do we route a particular user's output to a particular file?

As anyone who has read the previous article knows, it is the Appender that is responsible for outputting logs. We read the documentation first to see whether a solution already exists, and after some digging we find that SiftingAppender, combined with a Discriminator and MDC, seems to meet our needs. SiftingAppender can separate log events based on, for example, user sessions, so that logs produced by different users go into different log files, even one file per user. Discriminator is easy to understand: literally, a discriminator used to tell things apart. But what is MDC?

“Mapped Diagnostic Context” is essentially a map maintained by the logging framework where the application code provides key-value pairs which can then be inserted by the logging framework in log messages. MDC data can also be highly helpful in filtering messages or triggering certain actions.

In short, MDC is a Map maintained by the logging framework that can be used to enrich log messages, filter them, and trigger actions. Two best practices follow from it:

  • For those of you who have read the previous articles, you will remember the super SessionId and RequestId I mentioned. How are they implemented? With MDC, of course; see below.
  • The other is the discriminator working together with MDC, which is exactly what we need for dynamic file output; a small pattern sketch follows, and then let's try it out.
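To show how MDC values end up in the output: Logback's %X{key} conversion word prints the value stored under an MDC key. A minimal sketch, assuming a property named FILE_LOG_PATTERN; the name and the exact pattern are illustrative, chosen only to roughly match the log lines shown later, not the project's actual configuration.

<!-- Sketch only: property name and pattern are illustrative -->
<property name="FILE_LOG_PATTERN"
          value="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{40} --- requestId=%X{requestId}, sessionId=%X{sessionId} : %msg%n"/>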

The implementation idea: use a servlet Filter that runs first. Let's go straight to the code.

/**
 * LogFilter handles dynamic logging
 *
 * @author jiyanwai
 * @date 2020-01-16 06:05:19
 */
@Slf4j
@Component
@Order(Integer.MIN_VALUE)
@WebFilter("/*")
public class LogFilter implements Filter {

  public static final String REQUEST_ID_MDC_KEY = "requestId";
  public static final String SESSION_ID_MDC_KEY = "sessionId";
  public static final String STAND_ALONE_HEADER_KEY = "X-StandAlone-File";
  public static final String STAND_ALONE_SESSION_KEY = STAND_ALONE_HEADER_KEY;

  @Override
  public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
      throws IOException, ServletException {

    try {
      try {
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        // requestId and super SessionId can be customized as required
        String requestId = CommonUtils.generateRequestId();
        MDC.put(REQUEST_ID_MDC_KEY, requestId);
        String sessionId = request.getRequestedSessionId();
        MDC.put(SESSION_ID_MDC_KEY, sessionId);

        // MDC is added only when independent file output is enabled
        String standAlone = request.getHeader(STAND_ALONE_HEADER_KEY);
        if (standAlone == null) {
          standAlone = (String) request.getSession().getAttribute(STAND_ALONE_SESSION_KEY);
        }
        if (standAlone != null) {
          // You can customize the path here; the sessionId is concatenated into the file name, see the modified XML below
          MDC.put(STAND_ALONE_SESSION_KEY, sessionId);
        }
      } catch (Exception e) {
        // The exception will not affect services
        log.error("Error handler dynamic log", e);
      }
      // Continue without handling other Filter exceptions
      filterChain.doFilter(servletRequest, servletResponse);
    } finally {
      // Remember to clean up the MDC
      MDC.clear();
    }
  }
}
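One wiring note: in a Spring Boot project, classes annotated with @WebFilter are only scanned when @ServletComponentScan is present on a configuration class; since LogFilter is also a @Component, Spring Boot will register the filter bean for all URLs even without it. A minimal sketch, with a hypothetical application class name:

@SpringBootApplication
// Needed for @WebFilter scanning; the class name below is hypothetical
@ServletComponentScan
public class LoggingExtendApplication {

  public static void main(String[] args) {
    SpringApplication.run(LoggingExtendApplication.class, args);
  }
}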

Add a SiftingAppender to the configuration file:

<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>X-StandAlone-File</key>
    <! -- If the MDC node cannot be obtained, the default value is all -->
    <defaultValue>all</defaultValue>
  </discriminator>
  <sift>
    <! -- Appender Name needs to be independent -->
    <appender name="FILE-${X-StandAlone-File}" class="ch.qos.logback.core.FileAppender">
      <! -- The path and name of the log file being recorded -->
      <! -- Output location, ${x-standalone -File} fetch MDC -->
      <file>${LOG_PATH}/${X-StandAlone-File}.log</file>
      <! -- Logger scroll policy, by date, by size -->
      <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <! -- Scroll by day -->
        <fileNamePattern>${LOG_PATH}/%d{yyyy-MM}/log-%d{yyyy-MM-dd-HH}-%i.${X-StandAlone-File}.gz</fileNamePattern>
        <! -- Single file maximum 50M -->
        <maxFileSize>50MB</maxFileSize>
        <! -- Up to 5G of disk space, 500 files (no more than 5G in total) -->
        <maxHistory>500</maxHistory>
        <totalSizeCap>5GB</totalSizeCap>
      </rollingPolicy>
      <! -- Append log -->
      <append>true</append>
      <! -- Log file format -->
      <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>${FILE_LOG_PATTERN}</pattern>
        <charset>utf-8</charset>
      </encoder>
    </appender>
  </sift>
  <root level="error">
    <! For easy testing, output to console at the same time
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="SIFT"/>
  </root>
</appender>

Adjust the Controller method we added earlier:

/**
 * Enable debug for the current user. In production, combine this with online user management.
 *
 * @param session    the target user's session
 * @param standAlone whether to also output to a standalone file
 */
@PutMapping
public void startDebugBySession(HttpSession session,
                                @RequestParam(required = false) boolean standAlone) {
  // For testing only, you need to develop the function on the line, get the corresponding user session, and then put the property
  session.setAttribute(DynamicLoggingFilter.DEBUG_SESSION_KEY, "true");
  if (standAlone) {
    session.setAttribute(LogFilter.STAND_ALONE_SESSION_KEY, "");
  }
}

First test with the Header approach, without standalone file output enabled, to see the resulting file structure:

### Test using a RequestHeader
# Passing the flag via a header avoids polluting request parameters, which could be a problem (e.g. file uploads)
GET http://localhost:8082/log/level
Accept: */*
X-Debug: true
Cache-Control: no-cache

###

Both the requestId and the super SessionId were written to the log successfully:

2020-01-16 09:54:05.827 DEBUG [http-nio-8082-exec-1] c.j.l.extend.controller.LogController    --- requestId=ee77a879576147bcafdcb745ba5767d3, sessionId=21F4DE2AADBA675F2135601D760BF18E : A debug Log

Next, enable debug via the HTTP request, this time also enabling standalone file output:

### Enable DEBUG mode for the current user's Session, with standalone file output
PUT http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded

standAlone=true

###

Test Log Output

### Test the Session approach
GET http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

Extension

So far we have relied on the logging framework itself to solve dynamic levels, but changing them is inconvenient: you have to connect to the server and edit the configuration file with vim. As mentioned above, Logback also provides an API for changing the level directly. Let's take a look:

/**
 * Changes the level of a single logger
 *
 * @param loggerName logger instance name
 * @param level      level
 */
@PostMapping
public void changeLoggingLevel(String loggerName,
                               @RequestParam(required = false, 
                                             defaultValue = "DEBUG") String level) {
  ch.qos.logback.classic.Logger logger =
    (ch.qos.logback.classic.Logger) LoggerFactory.getLogger(loggerName);
  logger.setLevel(Level.toLevel(level));
}

Test it out:

### Test adjusting the log level globally
POST http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded

loggerName=ROOT

###

If we want to restore:

/**
 * Resets the log level to the configuration file default
 */
@DeleteMapping
public void restoreLoggingLevel() {
  LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

  try {
    JoranConfigurator configurator = new JoranConfigurator();
    configurator.setContext(context);
    context.reset();
    configurator.doConfigure(new ClassPathResource("logback.xml").getInputStream());
  } catch (JoranException | IOException ignore) {
    // StatusPrinter will handle this
  }
  StatusPrinter.printInCaseOfErrorsOrWarnings(context);
}

Continue testing:

### Test restoring the log level to the configuration file default
DELETE http://localhost:8082/log/level
Accept: */*
Cache-Control: no-cache

###

Conclusion

In this article we implemented dynamic adjustment of log levels and dynamic file output per request and per user, which should cover 99% of scenarios. We only used Logback here; readers who want the same with Log4j2 can try implementing it themselves. In the next post I'll cover errata, reader feedback, and a few best practices. Stay tuned for updates.

Finally, I'll post a mind map of the analysis process, hoping it helps you learn to analyze and solve problems.

The above is my personal view. If you have questions or spot mistakes, feel free to leave a comment to discuss and correct them. Writing this up was not easy; if you found it useful, please follow, like, and share.

Scan the QR code to follow the official account and be the first to get updates.

Code: github.com/jiyuanwai/l…