Operation logs are widely used in B-side (business-facing) systems and in some C-side (consumer-facing) systems. For example, customer service staff can use the operation log of a work order to quickly see who did what to it and when, which makes it much easier to locate problems. Operation logs differ from system logs in that they must be easy to understand. So how do we keep operation logging decoupled from business logic, and how do we make the logs easy to read and easy to query? These are the questions this article answers, focusing on how to log operations "elegantly".
1. Application scenarios of operation logs
Difference between system logs and operation logs
System logs: system logs are written for troubleshooting and are usually printed to log files. They are not very readable and carry code-level context, such as which line of which class emitted them.
Operation logs: operation logs record what was done to a business object when it was created or modified. They must be highly readable because they are shown to users, for example the logistics timeline of an order, where the user needs to know what happened and when, or the processing history of a work order that customer service needs to review.
Operation logs typically take the following forms:
- Plain text, such as: "Order created at 2021-09-16 10:00".
- Simple dynamic text, such as: "2021-09-16 10:00 Order created, order NO.11089999", which involves the variable order number NO.11089999.
- Modification text that contains the values before and after the change, such as: "2021-09-16 10:00 user Xiao Ming changed the delivery address of the order from 'Jincancan Community' to 'Yinzhanzhan Community'", which involves both the original address and the new address as variables.
- Form-style modifications, where multiple fields are changed at once.
2. Implementation method
2.1 Using Canal to listen to database changes and record operation logs
Canal is an open-source component that parses MySQL incremental logs (binlog) and provides subscription and consumption of incremental data. By listening to the database binlog, you can learn at the lowest level which data has changed, and then record operation logs based on the changed rows.
The advantage of this approach is that it is completely decoupled from business logic. The disadvantages are just as obvious: it is very limited, because it can only produce operation logs for database changes. If a change goes through another team's RPC interface, there is no database to listen to. For example, a notification service is usually a shared company-wide component, so the operation log for "notification sent" can only be recorded manually at the RPC call site.
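For illustration only, the sketch below shows roughly what such a listener could look like, based on Canal's standard client example; the connection address, destination, table filter and the recordOperationLog helper are assumptions, not part of the original article.

```java
import java.net.InetSocketAddress;
import java.util.List;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;

public class OrderBinlogListener {

    public static void main(String[] args) throws Exception {
        // Connect to a Canal server and subscribe to the order table (address, destination and filter are placeholders)
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        connector.connect();
        connector.subscribe("order_db\\.t_order");

        while (true) {
            // Pull a batch of binlog entries and acknowledge them after processing
            Message message = connector.getWithoutAck(100);
            for (CanalEntry.Entry entry : message.getEntries()) {
                if (entry.getEntryType() != CanalEntry.EntryType.ROWDATA) {
                    continue;
                }
                CanalEntry.RowChange rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
                for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                    // Build a human-readable operation log from the before/after column values (hypothetical helper)
                    recordOperationLog(rowChange.getEventType(),
                            rowData.getBeforeColumnsList(), rowData.getAfterColumnsList());
                }
            }
            connector.ack(message.getId());
        }
    }

    private static void recordOperationLog(CanalEntry.EventType eventType,
                                           List<CanalEntry.Column> before,
                                           List<CanalEntry.Column> after) {
        // Persist the readable text, e.g. "changed the delivery address from X to Y"
    }
}
```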
2.2 Records in Log files
```java
log.info("Order created, orderNo: {}", orderNo);
```
Recording operations this way requires solving three problems.
Question 1: How do we record the operator?
With the help of SLF4J's MDC utility class, we can put the operator into the logging context and have it printed uniformly in every log line. First, put the user's identity into the MDC in a user interceptor.
```java
@Component
public class UserInterceptor extends HandlerInterceptorAdapter {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        // Get the user ID
        String userNo = getUserNo(request);
        // Put the user ID into the MDC context
        MDC.put("userId", userNo);
        return super.preHandle(request, response, handler);
    }

    private String getUserNo(HttpServletRequest request) {
        // Obtain the currently logged-in user from SSO, Cookie or Auth information
        return null;
    }
}
```
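One detail worth adding here (our addition, not part of the original interceptor): when requests are handled by pooled threads, the MDC entry should be cleaned up after the request completes, otherwise the user ID can leak into the next request served by the same thread. A minimal sketch:

```java
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                            Object handler, Exception ex) throws Exception {
    // Remove the user ID so that pooled threads do not carry it into the next request
    MDC.remove("userId");
    super.afterCompletion(request, response, handler, ex);
}
```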
Then add the userId to the log pattern, using %X{userId} to read it from the MDC.
```xml
<pattern>"%d{yyyy-MM-dd HH:mm:ss.SSS} %t %-5level %X{userId} %logger{30}.%method:%L - %msg%n"</pattern>
```
Question 2: How do we separate operation logs from system logs?
By adjusting the logging configuration, operation logs can be written to a separate log file.
```xml
<!-- Route business (operation) logs to a separate file -->
<appender name="businessLogAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <File>logs/business.log</File>
    <append>true</append>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>INFO</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/business.%d.%i.log</fileNamePattern>
        <maxHistory>90</maxHistory>
        <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
            <maxFileSize>10MB</maxFileSize>
        </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>
    <encoder>
        <pattern>"%d{yyyy-MM-dd HH:mm:ss.SSS} %t %-5level %X{userId} %logger{30}.%method:%L - %msg%n"</pattern>
        <charset>UTF-8</charset>
    </encoder>
</appender>

<logger name="businessLog" additivity="false" level="INFO">
    <appender-ref ref="businessLogAppender"/>
</logger>
```
Then, in the Java code, write operation logs through the dedicated logger.

```java
// Declare the dedicated operation-log logger
private final Logger businessLog = LoggerFactory.getLogger("businessLog");

// Write an operation log entry
businessLog.info("Modified the delivery address");
```
Question 3: How do we generate readable log text?
Readable log text can be generated either with a LogUtil helper or with an aspect (AOP), both described in the following sections. With the configuration above, operation logs go to a separate file, and with log collection they can also be stored in Elasticsearch or a database for querying. Now let's look at how to generate readable operation logs.
2.3 Logging with LogUtil

```java
// Plain text
LogUtil.log(orderNo, "Order created", "Xiao Ming");
// Template-style text
LogUtil.log(orderNo, "Order created, order number " + "NO.11089999", "Xiao Ming");

String template = "User %s changed the delivery address of the order from '%s' to '%s'";
LogUtil.log(orderNo, String.format(template, "Xiao Ming", "Jincancan Community", "Yinzhanzhan Community"), "Xiao Ming");
```
A quick explanation of why the operation log is bound to an orderNo: an operation log records that at some "time", some "person" did some "thing" to some "object". When the business queries operation logs, it wants all operations on a given order, which is why the orderNo is passed in. The operator also needs to be recorded, which is why "Xiao Ming" is passed in as well.
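The article does not show LogUtil itself. As a rough sketch (the class body is our assumption), it could simply be a thin wrapper around the dedicated business logger from Section 2.2:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogUtil {

    private static final Logger businessLog = LoggerFactory.getLogger("businessLog");

    /**
     * Record one operation log entry: which business object, what happened, and who did it.
     */
    public static void log(String bizNo, String content, String operator) {
        // The operator could also be read from the MDC instead of being passed explicitly
        businessLog.info("bizNo={}, operator={}, content={}", bizNo, operator, content);
    }
}
```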
This does not look too bad: a single line of code records the operation inside the business method that changes the address. But let's look at a more complex example:
```java
private OnesIssueDO updateAddress(updateDeliveryRequest request) {
    DeliveryOrder deliveryOrder = deliveryQueryService.queryOldAddress(request.getDeliveryOrderNo());
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
    String logContent = getLogContent(request, deliveryOrder);
    LogUtils.logRecord(request.getOrderNo(), logContent, request.getOperator());
    return onesIssueDO;
}

private String getLogContent(updateDeliveryRequest request, DeliveryOrder deliveryOrder) {
    String template = "User %s changed the delivery address of the order from '%s' to '%s'";
    return String.format(template, request.getUserName(), deliveryOrder.getAddress(), request.getAddress());
}
```
As you can see, this example needs several lines of code plus a dedicated getLogContent() method just to record one operation log. As the business grows more complex, recording operation logs inside business code makes the business logic harder to follow: calls to LogUtils.logRecord() end up all over the business code, and methods like getLogContent() are scattered across business classes. This is a disaster for readability and maintainability. Let's see how to avoid it.
2.4 Implementing operation logs with method annotations
To address these issues, AOP is typically used to decouple operation logging from business logic. Let's start with a simple AOP logging example.
```java
@LogRecord(content = "Modified the delivery address")
public void modifyAddress(updateDeliveryRequest request) {
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
With the annotation we can record a fixed piece of text, which decouples operation logging from the business logic and keeps the business code clean. But as some readers may have noticed, although this decouples the logging code, the recorded text does not meet our expectations: it is static, with no dynamic content, while the log we actually need is "user %s changed the delivery address of the order from '%s' to '%s'". Next, we will show how to use AOP to generate dynamic operation logs gracefully.
3. Gracefully generating dynamic operation logs with AOP
3.1 Dynamic Template
Dynamic templates mean filling a template with variables through placeholders, so that operations can be logged purely through annotations. There are many ways to parse templates; here we use SpEL (Spring Expression Language). Let's first write down how we would like the logging to look, and then see whether it can be implemented.
```java
@LogRecord(content = "Changed the delivery address of the order from '#oldAddress' to '#request.address'")
public void modifyAddress(updateDeliveryRequest request, String oldAddress) {
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
By referencing method parameters with SpEL expressions, the template can be filled with variables, giving us dynamic operation log text. But several problems remain:
- The operation log needs to record which operator changed the delivery address of the order.
- The operation log for changing the delivery address must be bound to the delivery order, so that all operations on that order can be queried by its order number.
- Adding a business-irrelevant oldAddress parameter to the method signature, just so the annotation can record the previous delivery address, is not elegant.
To solve the first two problems, we need to change the expected usage of operation logs to the following:
```java
@LogRecord(content = "Changed the delivery address of the order from '#oldAddress' to '#request.address'",
        operator = "#request.userName", bizNo = "#request.deliveryOrderNo")
public void modifyAddress(updateDeliveryRequest request, String oldAddress) {
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
The modified code adds two parameters to the annotation: the operator, and the business object that the operation log is bound to. However, in most Web applications the current user is kept in a thread-local context and exposed through a static method, so the operator would usually be written like this (assuming the current login user is obtained via UserContext.getCurrentUser()):

```java
operator = "#{T(com.meituan.user.UserContext).getCurrentUser()}"
```

This means every @LogRecord annotation would carry this long operator expression. To avoid so much repetition, we make the operator parameter optional: users can still fill it in explicitly, but if it is left empty, the component fetches the current user from UserContext itself (how the user is obtained is described later). The simplest usage then becomes:
```java
@LogRecord(content = "Changed the delivery address of the order from '#oldAddress' to '#request.address'",
        bizNo = "#request.deliveryOrderNo")
public void modifyAddress(updateDeliveryRequest request, String oldAddress) {
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
Next comes the third problem: adding an oldAddress parameter just for logging is not a good design, so we need to remove it from the signature of the address-modifying method. But the operation log really does need the old address, so what can we do?
One option is to argue with the product manager and get the copy changed from "changed the delivery address of the order from XX to YY" to "changed the delivery address of the order to YY". But the first version is clearly friendlier to users, so we are unlikely to win that argument. That means we have to query the old address ourselves and make it available to the operation log. One workaround is to put the value into a thread context owned by the operation log component, so that the template on the annotation can use it. Let's rewrite the implementation this way.
```java
@LogRecord(content = "Changed the delivery address of the order from '#oldAddress' to '#request.address'",
        bizNo = "#request.deliveryOrderNo")
public void modifyAddress(updateDeliveryRequest request) {
    // Query the original address and stash it in the log context
    LogRecordContext.putVariable("oldAddress", DeliveryService.queryOldAddress(request.getDeliveryOrderNo()));
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
As you can see, LogRecordContext lets the template use variables beyond the method parameters, without distorting the method signature for the sake of logging. This is better than before, but it still adds a line of log-related code inside the business method. If that still bothers you, read on: custom functions, explained next, remove even that line. First, another example:
```java
@LogRecord(content = "Changed the courier of the order from '#oldDeliveryUserId' to '#request.userId'",
        bizNo = "#request.deliveryOrderNo")
public void modifyAddress(updateDeliveryRequest request) {
    // Query the original courier ID and stash it in the log context
    LogRecordContext.putVariable("oldDeliveryUserId", DeliveryService.queryOldDeliveryUserId(request.getDeliveryOrderNo()));
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
This template ends up recording something like: changed the courier of the order from "10090" to "10099". Users obviously cannot make sense of this; they have no idea what the IDs 10090 and 10099 mean. What they expect to see is: changed the courier of the order from "Zhang San (18910008888)" to "Xiao Ming (13910006666)" — what they care about is the courier's name and phone number. But the method only receives the courier's ID, not the name. We could query the name and phone number ourselves and push them in through LogRecordContext, as above.
But the whole point is to keep logging code out of the business logic. So let's consider another implementation: custom functions. If a custom function can convert the user ID into the user's name and phone number, the problem is solved. With this in mind, we change the template to:
```java
@LogRecord(content = "Changed the courier of the order from '{deliveryUser{#oldDeliveryUserId}}' to '{deliveryUser{#request.userId}}'",
        bizNo = "#request.deliveryOrderNo")
public void modifyAddress(updateDeliveryRequest request) {
    // Query the original courier ID and stash it in the log context
    LogRecordContext.putVariable("oldDeliveryUserId", DeliveryService.queryOldDeliveryUserId(request.getDeliveryOrderNo()));
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
Here deliveryUser is a custom function, and the curly braces wrap the Spring SpEL expression. There are two reasons for the braces: first, they distinguish custom functions from plain SpEL (Spring Expression Language), which simplifies parsing; second, templates that contain no SpEL at all can be detected easily, so SpEL parsing can be skipped to improve performance. At this point we realize the code can be optimized further into the following form:
```java
@LogRecord(content = "Changed the courier of the order from '{queryOldUser{#request.deliveryOrderNo}}' to '{deliveryUser{#request.userId}}'",
        bizNo = "#request.deliveryOrderNo")
public void modifyAddress(updateDeliveryRequest request) {
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
Now there is no need to call LogRecordContext.putVariable() inside modifyAddress() to stash the old courier. Instead, a custom function queryOldUser() takes the delivery order number as its parameter and looks up the previous courier; the only requirement is that this function is resolved before modifyAddress() executes. With that, the business code is clean again and even the most obsessive reader should be satisfied.
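The article does not show what queryOldUser() looks like at this point; it is implemented through the IParseFunction extension point described in Section 4.2.2. A hedged sketch might look like the following, where DeliveryService and Courier are hypothetical classes used only for illustration:

```java
@Component
public class QueryOldUserFunction implements IParseFunction {

    @Resource
    private DeliveryService deliveryService; // hypothetical query service

    // Must run before the business method, otherwise the "old" courier would already be overwritten
    @Override
    public boolean executeBefore() {
        return true;
    }

    @Override
    public String functionName() {
        return "queryOldUser";
    }

    // value is the delivery order number taken from the template: {queryOldUser{#request.deliveryOrderNo}}
    @Override
    public String apply(String value) {
        Courier courier = deliveryService.queryOldDeliveryUser(value); // hypothetical lookup
        return courier == null ? "" : courier.getName() + "(" + courier.getPhone() + ")";
    }
}
```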
4. Code implementation parsing
4.1 Code Structure
The operation log described above is implemented mainly through an AOP interceptor. The component is divided into an AOP module, a log parsing module, a log persistence module, and a Starter module. It exposes four extension points: custom functions, the default operator handler, and business-defined persistence and query, so businesses can customize these to fit their own characteristics.
4.2 Module Introduction
With the above analysis, we have settled on how we want operation logs to be written, so let's look at how to implement that behavior. The implementation breaks down into the following parts:
- AOP interception logic
- Parsing logic
  - Template parsing
  - LogRecordContext
  - Default operator logic
  - Custom function logic
- Default log persistence logic
- Starter encapsulation logic
4.2.1 AOP interception logic
This part is essentially an interceptor: for annotated methods it parses the operation log that needs to be recorded and then persists it. The annotation is named @LogRecordAnnotation. Let's look at its definition first:
```java
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Inherited
@Documented
public @interface LogRecordAnnotation {
    String success();

    String fail() default "";

    String operator() default "";

    String bizNo();

    String category() default "";

    String detail() default "";

    String condition() default "";
}
```
Besides the parameters discussed above, the annotation adds fail, category, detail, and condition, which are designed for specific scenarios. Concrete examples are given later, and a quick sketch follows the table below.
| Parameter | Description | Required |
| --- | --- | --- |
| success | Template of the operation log text | Yes |
| fail | Template of the text recorded when the operation fails | No |
| operator | The operator who performed the operation | No |
| bizNo | Identifier of the business object the operation log is bound to | Yes |
| category | Type of the operation log | No |
| detail | Extended field recording the details of the modification | No |
| condition | Condition under which the log is recorded | No |
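As a quick, hypothetical usage sketch of the optional parameters above (the template wording and the ModifyAmountRequest type are assumptions; it also assumes fail is the template used when the method throws, that `#_errorMsg` is the error-message variable the interceptor puts into the context as shown in Section 4.2.2, and that condition takes a boolean SpEL expression):

```java
@LogRecordAnnotation(
        success = "Changed the order amount to #request.amount",
        fail = "Failed to change the order amount: #_errorMsg", // recorded when the method throws
        bizNo = "#request.orderNo",
        category = "ORDER",
        condition = "#request.amount != null")                  // only log when the condition holds
public void modifyAmount(ModifyAmountRequest request) {
    // business logic
}
```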
To keep usage simple, the component has only two required parameters. Most business AOP is written with the @Aspect annotation, but annotation-based aspects have compatibility issues with Spring Boot 1.5, so the component wires up Spring AOP manually in order to stay compatible with Spring Boot 1.5.
The advisor is implemented with AbstractBeanFactoryPointcutAdvisor, the pointcut uses StaticMethodMatcherPointcut to match methods carrying the LogRecordAnnotation annotation, and the operation-log advice is implemented through the MethodInterceptor interface.
Here is the interceptor’s pointcut logic:
```java
public class LogRecordPointcut extends StaticMethodMatcherPointcut implements Serializable {

    // LogRecordAnnotation parsing class
    private LogRecordOperationSource logRecordOperationSource;

    @Override
    public boolean matches(@NonNull Method method, @NonNull Class<?> targetClass) {
        // Check whether the method carries @LogRecordAnnotation; if so, the parameters on the annotation are parsed
        return !CollectionUtils.isEmpty(logRecordOperationSource.computeLogRecordOperations(method, targetClass));
    }

    void setLogRecordOperationSource(LogRecordOperationSource logRecordOperationSource) {
        this.logRecordOperationSource = logRecordOperationSource;
    }
}
```
The core code of the advice (enhancement) logic is as follows:
```java
@Override
public Object invoke(MethodInvocation invocation) throws Throwable {
    Method method = invocation.getMethod();
    // Record the log around the method invocation
    return execute(invocation, invocation.getThis(), method, invocation.getArguments());
}

private Object execute(MethodInvocation invoker, Object target, Method method, Object[] args) throws Throwable {
    Class<?> targetClass = getTargetClass(target);
    Object ret = null;
    MethodExecuteResult methodExecuteResult = new MethodExecuteResult(true, null, "");
    LogRecordContext.putEmptySpan();
    Collection<LogRecordOps> operations = new ArrayList<>();
    Map<String, String> functionNameAndReturnMap = new HashMap<>();
    try {
        operations = logRecordOperationSource.computeLogRecordOperations(method, targetClass);
        List<String> spElTemplates = getBeforeExecuteFunctionTemplate(operations);
        // Evaluate custom functions that must run before the business logic executes
        functionNameAndReturnMap = processBeforeExecuteFunctionTemplate(spElTemplates, targetClass, method, args);
    } catch (Exception e) {
        log.error("log record parse before function exception", e);
    }
    try {
        ret = invoker.proceed();
    } catch (Exception e) {
        methodExecuteResult = new MethodExecuteResult(false, e, e.getMessage());
    }
    try {
        if (!CollectionUtils.isEmpty(operations)) {
            recordExecute(ret, method, args, operations, targetClass,
                    methodExecuteResult.isSuccess(), methodExecuteResult.getErrorMsg(), functionNameAndReturnMap);
        }
    } catch (Exception t) {
        // Errors while recording the log must not affect the business
        log.error("log record parse exception", t);
    } finally {
        LogRecordContext.clear();
    }
    if (methodExecuteResult.throwable != null) {
        throw methodExecuteResult.throwable;
    }
    return ret;
}
```
The interception flow works like this:
As you can see, the operation log is persisted after the method finishes executing. If the method throws an exception, the exception is caught first and rethrown after the log has been persisted. Custom functions marked for early evaluation are resolved before the business method runs, which covers the "query the old value before updating" requirement mentioned earlier.
4.2.2 Parsing logic
Template parsing
Spring 3 introduced a powerful feature: the Spring Expression Language (SpEL). It is the core expression-evaluation module of the Spring ecosystem and can be used independently of Spring itself. Here is an example:
```java
public static void main(String[] args) {
    SpelExpressionParser parser = new SpelExpressionParser();
    Expression expression = parser.parseExpression("#root.purchaseName");
    Order order = new Order();
    order.setPurchaseName("Zhang San");
    System.out.println(expression.getValue(order));
}
```

This method prints "Zhang San". The class diagram of the LogRecord parsing classes is as follows:
The core parsing class is LogRecordValueParser, which encapsulates the custom functions and the SpEL parsing class LogRecordExpressionEvaluator.
```java
public class LogRecordExpressionEvaluator extends CachedExpressionEvaluator {

    private Map<ExpressionKey, Expression> expressionCache = new ConcurrentHashMap<>(64);

    private final Map<AnnotatedElementKey, Method> targetMethodCache = new ConcurrentHashMap<>(64);

    public String parseExpression(String conditionExpression, AnnotatedElementKey methodKey, EvaluationContext evalContext) {
        return getExpression(this.expressionCache, methodKey, conditionExpression).getValue(evalContext, String.class);
    }
}
```
LogRecordExpressionEvaluator inherits from CachedExpressionEvaluator and holds two maps: expressionCache and targetMethodCache. As the example above shows, a SpEL string is parsed into an Expression and then evaluated against a given object. expressionCache caches the mapping from a method and its SpEL string to the parsed Expression, so that the expression on an annotation is parsed only once; targetMethodCache caches the methods used when expressions are evaluated. The core parsing logic is the last line above:
```java
getExpression(this.expressionCache, methodKey, conditionExpression).getValue(evalContext, String.class);
```
getExpression() looks up the parsed Expression instance for the expression on the @LogRecordAnnotation from expressionCache, and then getValue() is called with an evalContext, which plays the same role as the order object in the earlier example. The Context implementation is described next.
Log context implementation
The example below puts variables into LogRecordContext so that the SpEL expression can resolve values that are not method parameters. As the SpEL example above shows, to evaluate an expression we need to expose both the method parameters and the LogRecordContext variables to the object passed to SpEL's getValue() method. Here is how the usage looks:
```java
@LogRecord(content = "Changed the courier of the order from '{deliveryUser{#oldDeliveryUserId}}' to '{deliveryUser{#request.getUserId()}}'",
        bizNo = "#request.getDeliveryOrderNo()")
public void modifyAddress(updateDeliveryRequest request) {
    // Query the original courier ID and stash it in the log context
    LogRecordContext.putVariable("oldDeliveryUserId", DeliveryService.queryOldDeliveryUserId(request.getDeliveryOrderNo()));
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
LogRecordValueParser creates an EvaluationContext so that SpEL can resolve both the method parameters and the variables in the Context. The relevant code is:

```java
EvaluationContext evaluationContext = expressionEvaluator.createEvaluationContext(
        method, args, targetClass, ret, errorMsg, beanFactory);
```

This EvaluationContext is the object later passed to getValue() during parsing. The inheritance hierarchy of LogRecordEvaluationContext is shown below:
LogRecordEvaluationContext does three things:
- Puts all method arguments into the RootObject that SpEL parses against.
- Puts all variables from LogRecordContext into the RootObject.
- Puts the method's return value and the ErrorMsg into the RootObject.
The code of LogRecordEvaluationContext is as follows:

```java
public class LogRecordEvaluationContext extends MethodBasedEvaluationContext {

    public LogRecordEvaluationContext(Object rootObject, Method method, Object[] arguments,
                                      ParameterNameDiscoverer parameterNameDiscoverer, Object ret, String errorMsg) {
        // Put the method arguments into the RootObject that SpEL parses against
        super(rootObject, method, arguments, parameterNameDiscoverer);
        // Put all variables from LogRecordContext into the context
        Map<String, Object> variables = LogRecordContext.getVariables();
        if (variables != null && variables.size() > 0) {
            for (Map.Entry<String, Object> entry : variables.entrySet()) {
                setVariable(entry.getKey(), entry.getValue());
            }
        }
        // Put the method return value and the error message into the context
        setVariable("_ret", ret);
        setVariable("_errorMsg", errorMsg);
    }
}
```
The following is part of the LogRecordContext implementation. The class holds a stack inside a ThreadLocal variable; each element of the stack is a Map from variable name to variable value.
```java
public class LogRecordContext {

    private static final InheritableThreadLocal<Stack<Map<String, Object>>> variableMapStack = new InheritableThreadLocal<>();

    // other methods omitted...
}
```
Note that InheritableThreadLocal is used here, so LogRecordContext has problems in thread-pool scenarios; if thread pools need to be supported, Alibaba's open-source TransmittableThreadLocal (TTL) framework can be used instead. Why keep a Stack in the ThreadLocal rather than a plain ThreadLocal<Map<String, Object>>? Let's look at the reason.
```java
@LogRecord(content = "Changed the courier of the order from '{deliveryUser{#oldDeliveryUserId}}' to '{deliveryUser{#request.getUserId()}}'",
        bizNo = "#request.getDeliveryOrderNo()")
public void modifyAddress(updateDeliveryRequest request) {
    // Query the original courier ID and stash it in the log context
    LogRecordContext.putVariable("oldDeliveryUserId", DeliveryService.queryOldDeliveryUserId(request.getDeliveryOrderNo()));
    // Update the shipping information: phone number, recipient, address
    doUpdate(request);
}
```
The execution flow of the above code is as follows:
This looks fine on its own, but when a method annotated with LogRecordAnnotation is called from inside another method that is also annotated with LogRecordAnnotation, the flow becomes this:
With a plain ThreadLocal<Map<String, Object>>, the Map would already have been released by the time method 1 needs it, so method 1 could not fetch its variables; method 1 and method 2 would also share the same variable Map, so if method 2 set a variable with the same name as method 1, the values would overwrite each other. So the variable lifecycle of LogRecordContext ultimately needs to look like this:
LogRecordContext pushes a new Map onto the stack every time an annotated method starts executing, and pops it after the method finishes, which avoids both the sharing and the overwriting problems.
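The article omits most of the LogRecordContext body; below is a minimal sketch consistent with the calls seen above (putEmptySpan, putVariable, getVariables, clear). It is our reconstruction, not the component's actual code, and as noted earlier, InheritableThreadLocal would be swapped for Alibaba's TransmittableThreadLocal if thread pools must be supported.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;

public class LogRecordContext {

    private static final InheritableThreadLocal<Stack<Map<String, Object>>> VARIABLE_MAP_STACK =
            new InheritableThreadLocal<>();

    // Called by the interceptor before the business method runs: one new span (Map) per method execution
    public static void putEmptySpan() {
        Stack<Map<String, Object>> stack = VARIABLE_MAP_STACK.get();
        if (stack == null) {
            stack = new Stack<>();
            VARIABLE_MAP_STACK.set(stack);
        }
        stack.push(new HashMap<>());
    }

    // Called from business code to expose an extra variable to the template of the current method
    public static void putVariable(String name, Object value) {
        VARIABLE_MAP_STACK.get().peek().put(name, value);
    }

    // Variables of the innermost (currently executing) annotated method
    public static Map<String, Object> getVariables() {
        Stack<Map<String, Object>> stack = VARIABLE_MAP_STACK.get();
        return (stack == null || stack.isEmpty()) ? Collections.emptyMap() : stack.peek();
    }

    // Called by the interceptor in finally: pop the current span so nested methods neither share nor overwrite variables
    public static void clear() {
        Stack<Map<String, Object>> stack = VARIABLE_MAP_STACK.get();
        if (stack != null && !stack.isEmpty()) {
            stack.pop();
        }
    }
}
```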
Default operator logic
LogRecordInterceptor holds a reference to the IOperatorGetService interface, which is responsible for obtaining the current user. Here is its definition:
```java
public interface IOperatorGetService {

    /**
     * Obtain the currently logged-in user from an external context, e.g. UserContext.getCurrentUser().
     *
     * @return the user converted to an Operator
     */
    Operator getUser();
}
```
Here is an example of getting a user from a user context:
```java
public class DefaultOperatorGetServiceImpl implements IOperatorGetService {

    @Override
    public Operator getUser() {
        // UserUtils reads the user from the user context
        return Optional.ofNullable(UserUtils.getUser())
                .map(a -> new Operator(a.getName(), a.getLogin()))
                .orElseThrow(() -> new IllegalArgumentException("user is null"));
    }
}
```
When the component resolves the operator, it first checks whether the operator attribute on the annotation is empty. If it is, the operator is taken from IOperatorGetService.getUser(); if that cannot provide one either, an error is thrown.
```java
String realOperatorId = "";
if (StringUtils.isEmpty(operatorId)) {
    if (operatorGetService.getUser() == null || StringUtils.isEmpty(operatorGetService.getUser().getOperatorId())) {
        throw new IllegalArgumentException("user is null");
    }
    realOperatorId = operatorGetService.getUser().getOperatorId();
} else {
    spElTemplates = Lists.newArrayList(bizKey, bizNo, action, operatorId, detail);
}
```
Custom function logic
The class diagram for the custom function is as follows:
Here is the interface definition of IParseFunction. The executeBefore() method indicates whether the custom function should be evaluated before the business code runs, which supports the "query the old value before updating" scenario mentioned earlier.
```java
public interface IParseFunction {

    default boolean executeBefore() {
        return false;
    }

    String functionName();

    String apply(String value);
}
```
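For comparison with the queryOldUser sketch in Section 3, a function like deliveryUser, which only converts an ID into a readable name and phone number, can keep the default executeBefore() = false. This is again a hypothetical sketch; UserQueryService and Courier are assumed types:

```java
@Component
public class DeliveryUserParseFunction implements IParseFunction {

    @Resource
    private UserQueryService userQueryService; // hypothetical user lookup service

    @Override
    public String functionName() {
        return "deliveryUser";
    }

    // value is the courier ID taken from the template, e.g. {deliveryUser{#request.userId}}
    @Override
    public String apply(String value) {
        Courier courier = userQueryService.getById(value); // hypothetical call
        return courier == null ? value : courier.getName() + "(" + courier.getPhone() + ")";
    }
}
```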
The ParseFunctionFactory code is relatively simple; its job is to collect all IParseFunction beans into a function factory.
```java
public class ParseFunctionFactory {

    private Map<String, IParseFunction> allFunctionMap;

    public ParseFunctionFactory(List<IParseFunction> parseFunctions) {
        if (CollectionUtils.isEmpty(parseFunctions)) {
            return;
        }
        allFunctionMap = new HashMap<>();
        for (IParseFunction parseFunction : parseFunctions) {
            if (StringUtils.isEmpty(parseFunction.functionName())) {
                continue;
            }
            allFunctionMap.put(parseFunction.functionName(), parseFunction);
        }
    }

    public IParseFunction getFunction(String functionName) {
        return allFunctionMap.get(functionName);
    }

    public boolean isBeforeFunction(String functionName) {
        return allFunctionMap.get(functionName) != null && allFunctionMap.get(functionName).executeBefore();
    }
}
```
The logic of DefaultFunctionServiceImpl is to look up the IParseFunction that matches the given functionName, pass the parameter to its apply() method, and return the function's result.
```java
public class DefaultFunctionServiceImpl implements IFunctionService {

    private final ParseFunctionFactory parseFunctionFactory;

    public DefaultFunctionServiceImpl(ParseFunctionFactory parseFunctionFactory) {
        this.parseFunctionFactory = parseFunctionFactory;
    }

    @Override
    public String apply(String functionName, String value) {
        IParseFunction function = parseFunctionFactory.getFunction(functionName);
        if (function == null) {
            return value;
        }
        return function.apply(value);
    }

    @Override
    public boolean beforeFunction(String functionName) {
        return parseFunctionFactory.isBeforeFunction(functionName);
    }
}
```
4.2.3 Log Persistence Logic
The LogRecordInterceptor also references ILogRecordService, the service that defines the log-persistence interface.
```java
public interface ILogRecordService {

    /**
     * Save the log.
     *
     * @param logRecord the log entity
     */
    void record(LogRecord logRecord);
}
```
A business can implement this interface and store logs on any medium it likes. The default implementation below simply writes to a log file via log.info, as in Section 2.2. A business implementation can save logs synchronously or asynchronously; it can join the business transaction to keep operation logs consistent with the business change, or open a new transaction so that logging failures never affect the business transaction. Logs can be stored in Elasticsearch, a database, or files, and queried according to how they are structured and where they are stored.
```java
@Slf4j
public class DefaultLogRecordServiceImpl implements ILogRecordService {

    @Override
    // @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void record(LogRecord logRecord) {
        log.info("logRecord={}", logRecord);
    }
}
```
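As an illustration of the choices described above (a hypothetical sketch, not part of the component), a business implementation could persist logs asynchronously to its own table; LogRecordMapper is an assumed mapper for an operation_log table, and @Async assumes @EnableAsync is configured:

```java
@Service
public class DbLogRecordServiceImpl implements ILogRecordService {

    @Resource
    private LogRecordMapper logRecordMapper; // hypothetical mapper for an operation_log table

    @Async // drop this to save synchronously inside the business transaction instead
    @Override
    public void record(LogRecord logRecord) {
        logRecordMapper.insert(logRecord);
    }
}
```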
4.2.4 Starter logic encapsulation
With the core logic covered, the component still needs to be assembled so that users can adopt it easily. To use it, all you need is to add the @EnableLogRecord(tenant = "com.mzt.test") annotation to the Spring Boot entry class; tenant identifies the tenant, for multi-tenant scenarios.
```java
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableTransactionManagement
@EnableLogRecord(tenant = "com.mzt.test")
public class Main {

    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }
}
```
Looking at the EnableLogRecord code, it imports LogRecordConfigureSelector.class, and LogRecordConfigureSelector in turn exposes the LogRecordProxyAutoConfiguration class.
```java
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import(LogRecordConfigureSelector.class)
public @interface EnableLogRecord {
    String tenant();

    AdviceMode mode() default AdviceMode.PROXY;
}
```
LogRecordProxyAutoConfiguration is the class that assembles the core components described above. Its code is as follows:
```java
@Configuration
@Slf4j
public class LogRecordProxyAutoConfiguration implements ImportAware {

    private AnnotationAttributes enableLogRecord;

    @Bean
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    public LogRecordOperationSource logRecordOperationSource() {
        return new LogRecordOperationSource();
    }

    @Bean
    @ConditionalOnMissingBean(IFunctionService.class)
    public IFunctionService functionService(ParseFunctionFactory parseFunctionFactory) {
        return new DefaultFunctionServiceImpl(parseFunctionFactory);
    }

    @Bean
    public ParseFunctionFactory parseFunctionFactory(@Autowired List<IParseFunction> parseFunctions) {
        return new ParseFunctionFactory(parseFunctions);
    }

    @Bean
    @ConditionalOnMissingBean(IParseFunction.class)
    public DefaultParseFunction parseFunction() {
        return new DefaultParseFunction();
    }

    @Bean
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    public BeanFactoryLogRecordAdvisor logRecordAdvisor(IFunctionService functionService) {
        BeanFactoryLogRecordAdvisor advisor = new BeanFactoryLogRecordAdvisor();
        advisor.setLogRecordOperationSource(logRecordOperationSource());
        advisor.setAdvice(logRecordInterceptor(functionService));
        return advisor;
    }

    @Bean
    @Role(BeanDefinition.ROLE_INFRASTRUCTURE)
    public LogRecordInterceptor logRecordInterceptor(IFunctionService functionService) {
        LogRecordInterceptor interceptor = new LogRecordInterceptor();
        interceptor.setLogRecordOperationSource(logRecordOperationSource());
        interceptor.setTenant(enableLogRecord.getString("tenant"));
        interceptor.setFunctionService(functionService);
        return interceptor;
    }

    @Bean
    @ConditionalOnMissingBean(IOperatorGetService.class)
    @Role(BeanDefinition.ROLE_APPLICATION)
    public IOperatorGetService operatorGetService() {
        return new DefaultOperatorGetServiceImpl();
    }

    @Bean
    @ConditionalOnMissingBean(ILogRecordService.class)
    @Role(BeanDefinition.ROLE_APPLICATION)
    public ILogRecordService recordService() {
        return new DefaultLogRecordServiceImpl();
    }

    @Override
    public void setImportMetadata(AnnotationMetadata importMetadata) {
        this.enableLogRecord = AnnotationAttributes.fromMap(
                importMetadata.getAnnotationAttributes(EnableLogRecord.class.getName(), false));
        if (this.enableLogRecord == null) {
            log.info("@EnableLogRecord is not present on importing class");
        }
    }
}
```
This class implements ImportAware in order to read the tenant attribute from EnableLogRecord. It wires AOP together through the logRecordAdvisor and logRecordInterceptor beans, and the custom functions are injected into the logRecordAdvisor as well.
The externally extensible interfaces are IOperatorGetService, ILogRecordService, and IParseFunction. A business can provide its own implementations, and because @ConditionalOnMissingBean is configured, the user's beans override the component's default implementations.
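For example, a hypothetical business configuration that replaces the two defaults could look like this (MyOperatorGetServiceImpl and MyLogRecordServiceImpl stand for the business's own implementations):

```java
@Configuration
public class OperationLogConfiguration {

    // Replaces DefaultOperatorGetServiceImpl because of @ConditionalOnMissingBean(IOperatorGetService.class)
    @Bean
    public IOperatorGetService operatorGetService() {
        return new MyOperatorGetServiceImpl(); // e.g. backed by the company's SSO context
    }

    // Replaces DefaultLogRecordServiceImpl because of @ConditionalOnMissingBean(ILogRecordService.class)
    @Bean
    public ILogRecordService logRecordService() {
        return new MyLogRecordServiceImpl(); // e.g. saves to a database, as sketched in Section 4.2.3
    }
}
```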
5. To summarize
This article introduced common ways to record operation logs and how to make the implementation simpler and the logs easier to understand, walking through the concrete implementation of the component module by module. If you have any questions about the component, feel free to leave a comment at the end of the article and we will answer them.
6. Author introduction
Zhan Tong, engineer of Basic R&D Platform/R&D Quality and Efficiency Department, joined Meituan in 2020.
7. Reference materials
- Canal
- spring-framework
- Spring Expression Language (SpEL)
- InheritableThreadLocal and TransmittableThreadLocal (TTL)
8. Job information
Meituan's R&D Quality and Efficiency Department is committed to building an industry-leading continuous delivery platform. We are currently hiring engineers for basic components in Beijing and Shanghai; interested candidates are welcome to send a resume to [email protected] (email subject: Meituan R&D Quality and Efficiency Department).
This article was produced by the Meituan technical team; copyright belongs to Meituan. You are welcome to reprint or use the content for non-commercial purposes such as sharing and communication, provided you note "Content reprinted from the Meituan technical team." The article may not be reproduced or used commercially without permission; for any commercial use, please email [email protected] to request authorization.