Background
At present, microservice and distributed service architectures are widely used at Internet companies, and Alibaba's Dubbo is widely used in China. However, Dubbo is only an RPC framework and lacks a full-link tracing component, unlike Spring Cloud, which is a whole ecosystem containing all kinds of distributed components.
Our company also runs on Dubbo and distributed services. Although this solves the problem of horizontal scaling, it brings new problems of its own, link tracing being one of them. In a monolithic application, all logs live in a single project. Now that we run a service cluster, the call relationships between services are complex, and which machine actually serves a given request is decided by the load-balancing algorithm, so digging through logs is hard. Even with an ELK centralized logging system, many requests are handled concurrently and their logs are mixed together. So it is best to have link tracing, and in particular full-link tracing: not just finding all logs of the current thread within one project, but finding all logs of the same request across different projects.
With full-link tracing, every service node involved in a distributed call can be linked together, which makes troubleshooting and monitoring the system much easier.
The principle is the industry-standard distributed trace ID: it is generated at the entry point and passed all the way down the call chain, so that the log of every service contains the same trace ID.
Within threads and across processes
Two big problems need to be solved:
1. Link tracing within the same thread of the same process is implemented with Java's thread-local variable, ThreadLocal (a minimal sketch of such a holder follows this list).
2. Cross-process link tracing is solved by passing the trace ID between Dubbo service consumers and service providers, based on the Dubbo context RpcContext.
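As an illustration only, here is a minimal sketch of a thread-local trace ID holder. The class name TraceContext is hypothetical; the article's actual implementation keeps this state inside the Logback converter shown in the code section below.

// Hypothetical helper illustrating the ThreadLocal idea; not the article's actual class.
public final class TraceContext {

    private static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    private TraceContext() {
    }

    // Bind a trace ID to the current thread, e.g. at the service entry point.
    public static void setTraceId(String traceId) {
        TRACE_ID.set(traceId);
    }

    // Read the trace ID of the current thread so every log statement can include it.
    public static String getTraceId() {
        return TRACE_ID.get();
    }

    // Remove the value when the request finishes to avoid leaking it to pooled threads.
    public static void clear() {
        TRACE_ID.remove();
    }
}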
Architecture diagram
Core design idea
1. In-process link tracing. So-called in-process link tracing refers to link tracing inside a service consumer or a service provider. The implementation is based on: 1) extending the logging framework (such as Logback); 2) automatically carrying the trace ID every time a log is printed.
2. Cross-process link tracing. So-called cross-process means that the service consumer and the service provider are two processes running on different machines. The implementation is based on: 1) a custom Dubbo interceptor that implements the Dubbo Filter interface and performs different logic depending on whether the current request thread belongs to a service consumer or a service provider; 2) on the consumer side, the interceptor reads the trace ID from the log ThreadLocal and writes it into the Dubbo context RpcContext; 3) on the provider side, the interceptor reads the trace ID from the Dubbo context RpcContext and writes it into the log ThreadLocal.
Conclusion
1. In-thread link tracing is implemented with ThreadLocal.
2. Cross-process link tracing is implemented by extending Dubbo; every process must include the link tracing component in order to get full-link tracing.
3. The trace ID is written into the log stamp field of every log line. By searching for the trace ID in the ELK centralized logging system, the complete logs of a request can be found. The log stamp field has to be added by extending the logging framework, for example SLF4J + Logback (a hypothetical example log line is shown after this list).
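As a purely hypothetical illustration of the format defined in the code section below, a single log line might look like this; the third field, 123456-27, is the log stamp derived from the orderId keyword in the message plus the thread ID, and it is what would be searched for in ELK:

2019-06-01 10:15:30.123|INFO |123456-27|com.xxx.order.OrderServiceImpl.createOrder:88|{}|create order success, orderId:123456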
Code
Across processes
Custom Dubbo interceptor extension
// Imports for the Dubbo 2.6.x (com.alibaba.dubbo) API used below; commons-lang3 is assumed for StringUtils.
import java.util.UUID;

import org.apache.commons.lang3.StringUtils;

import com.alibaba.dubbo.common.Constants;
import com.alibaba.dubbo.common.extension.Activate;
import com.alibaba.dubbo.rpc.Filter;
import com.alibaba.dubbo.rpc.Invocation;
import com.alibaba.dubbo.rpc.Invoker;
import com.alibaba.dubbo.rpc.Result;
import com.alibaba.dubbo.rpc.RpcContext;
import com.alibaba.dubbo.rpc.RpcException;

/**
 * Dubbo log interceptor.
 * 1. Prints consumer and provider request arguments and responses, based on the Dubbo Filter mechanism.
 * 2. Implements full-link tracing based on the log stamp (thread-local variable) and the Dubbo context.
 * Note: every service participating in full-link tracing must be configured with this interceptor.
 */
@Activate(group = {Constants.PROVIDER, Constants.CONSUMER})
public class AccessLogExtFilter implements Filter {

    @Override
    public Result invoke(Invoker<?> invoker, Invocation inv) throws RpcException {
        ...
        handleTraceNo();
        ...
    }

    /**
     * Passes the trace ID between consumer and provider.
     */
    private void handleTraceNo() {
        boolean isConsumer = RpcContext.getContext().isConsumerSide();
        if (isConsumer) {
            // Consumer side: write the current thread's trace ID into the Dubbo context attachments.
            RpcContext.getContext().setAttachment("xxx_TRACE_NO",
                    LogPreFixConverter.getCurrentThreadLogPreFix(""));
        } else {
            // Provider side: read the trace ID from the Dubbo context; fall back to a random one if missing.
            String traceNo = RpcContext.getContext().getAttachment("xxx_TRACE_NO");
            if (StringUtils.isBlank(traceNo)) {
                traceNo = UUID.randomUUID().toString();
            }
            // Bind the received trace ID to the current provider thread.
            LogPreFixConverter.setLogPreFixNoAppendThread(traceNo);
        }
    }
}
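For the filter to be picked up, it has to be registered as a Dubbo SPI extension. A minimal sketch, assuming Dubbo 2.6.x and a hypothetical com.xxx.trace package; the file lives at src/main/resources/META-INF/dubbo/com.alibaba.dubbo.rpc.Filter:

accessLogExtFilter=com.xxx.trace.AccessLogExtFilter

Because the filter is annotated with @Activate for both the provider and consumer groups, it is activated automatically on both sides once the jar is on the classpath; alternatively it can be referenced explicitly via the filter attribute of <dubbo:provider/> and <dubbo:consumer/>.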
Within the thread
Extending the logging framework Logback
1) Custom field in the log configuration file
<!--
  Unified log format so that the ELK tooling can collect and parse the logs.
  Fields: timestamp | log level | log stamp (trace ID) | code path | extension json (for chart/data collection, blank when there are no fields) | desensitized log content
-->
<property name="logback.pattern"
          value="%date{yyyy-MM-dd HH:mm:ss.SSS}|%-5level|%logPreFix|%class.%method:%line|%extjson|%sensitiveMsg%n"/>
logPreFix is the custom field / log stamp / full-link trace ID (how it is registered with Logback as a conversion word is sketched below).
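Logback maps a custom conversion word to its converter class through a conversionRule entry in logback.xml. A minimal sketch, assuming a hypothetical com.xxx.trace.log package for the converter; the extjson and sensitiveMsg words would need their own converters, which this article does not show:

<configuration>
    <!-- Map the %logPreFix conversion word used in the pattern to the custom converter class. -->
    <conversionRule conversionWord="logPreFix"
                    converterClass="com.xxx.trace.log.LogPreFixConverter"/>
    ...
</configuration>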
2) Implement the custom field processing class
// Imports for the Logback converter; commons-lang3 is assumed for StringUtils.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.commons.lang3.StringUtils;

import ch.qos.logback.classic.pattern.ClassicConverter;
import ch.qos.logback.classic.spi.ILoggingEvent;

public class LogPreFixConverter extends ClassicConverter {

    private static final ThreadLocal<String> logPreFixThreadLocal = new ThreadLocal<String>();

    // The implementation is a bit involved: full-link tracing is built on this thread-local variable.
    @Override
    public String convert(ILoggingEvent event) {
        // Map the conversion word to the value of the current thread's log stamp.
        try {
            return getCurrentThreadLogPreFix(event.getFormattedMessage());
        } catch (Throwable t) {
            System.err.println("LogPreFixConverter handle exception " + t.getMessage());
        }
        return "errorLogPreFix";
    }

    /**
     * @param msg looks for orderId, tradeSn and other keywords in the log message;
     *            if none is found, a default ID is generated.
     * @return the log stamp of the current thread
     */
    public static String getCurrentThreadLogPreFix(String msg) {
        String logPreFixStr = logPreFixThreadLocal.get();
        if (StringUtils.isBlank(logPreFixStr)) {
            // key:value - the value may be blank, may end with a space, or may be the last characters of the line
            String rexp = "(orderid|tradesn|batchid):(\\s|.+?(\\s|.$))";
            Pattern pattern = Pattern.compile(rexp, Pattern.CASE_INSENSITIVE);
            Matcher matcher = pattern.matcher(msg);
            if (matcher.find()) {
                // Get key:value
                logPreFixStr = matcher.group();
                // Get value; strip quotes in case the value is JSON
                logPreFixStr = logPreFixStr.substring(logPreFixStr.indexOf(":") + 1).replace("\"", "");
            }
            if (StringUtils.isBlank(logPreFixStr)) {
                logPreFixStr = generateId(12);
            }
            logPreFixStr = new StringBuilder().append(logPreFixStr).append("-")
                    .append(Thread.currentThread().getId()).toString();
            logPreFixThreadLocal.set(logPreFixStr);
        }
        return logPreFixStr;
    }

    // generateId(...) and setLogPreFixNoAppendThread(...) are referenced but not shown in the original excerpt.
}
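The article calls two helpers that the excerpt above does not define: setLogPreFixNoAppendThread(...), used by the Dubbo filter on the provider side, and generateId(...). A minimal sketch of what they could look like, assuming they are static methods of the same LogPreFixConverter class; the names come from the calls above, the bodies are guesses:

    // Bind a trace ID received from an upstream caller to the current thread.
    // "NoAppendThread" is read here as: do not append the thread ID again,
    // because the consumer side has already appended its own.
    public static void setLogPreFixNoAppendThread(String traceNo) {
        logPreFixThreadLocal.set(traceNo);
    }

    // Generate a random alphanumeric ID of the given length as a fallback trace ID.
    public static String generateId(int length) {
        String uuid = java.util.UUID.randomUUID().toString().replace("-", "");
        return uuid.substring(0, Math.min(length, uuid.length()));
    }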