Logs are an important means of analyzing problems in production. Usually we output logs to the console or a local file and troubleshoot by searching the local log for keywords. However, more and more companies adopt a distributed architecture in their projects, so logs are scattered across multiple servers or files, and analyzing a problem may require inspecting several log files to locate it; when the related projects are maintained by different teams, communication costs skyrocket. Aggregating the logs of the various systems and linking all the events of one transaction request through a keyword is an effective way to analyze distributed system problems.
ELK (ElasticSearch + Logstash + Kibana) is a common log analysis stack, covering log collection (Logstash), log storage and search (ElasticSearch), and query and display (Kibana). We use ELK as our log storage and analysis system and link related log entries by assigning a requestId to each request. The overall structure of ELK is shown in the figure below:
1. Install Logstash. Install the Java environment first. From Oracle's website (http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html), download the JDK package matching your operating system. Upload jdk-8u45-linux-x64.tar.gz to the server and execute:
mkdir /usr/local/java
tar -zxf jdk-8u45-linux-x64.tar.gz -C /usr/local/java/
Then configure the Java environment:
export JAVA_HOME=/usr/local/java/jdk1.8.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH
Run java -version. If the Java version information is displayed, the JDK is configured successfully.
Download and unpack Logstash:
wget https://download.elastic.co/logstash/logstash/logstash-2.4.0.tar.gz
tar -xzvf logstash-2.4.0.tar.gz
Enter the installation directory: cd #{dir}/logstash-2.4.0, then create a Logstash test configuration file: vim test.conf
input {
  stdin { }
}
output {
  stdout {
    codec => rubydebug {}
  }
}
Run the Logstash test: bin/logstash -f test.conf. When the startup message appears, Logstash is up; type "Hello world" and the event is echoed back.
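The echoed event looks roughly like the following (the timestamp and host values depend on your machine and are only illustrative):
{
       "message" => "Hello world",
      "@version" => "1",
    "@timestamp" => "2016-10-13T07:00:00.000Z",
          "host" => "localhost"
}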
Since we configured the console as the output, seeing the log content in the above format means the test succeeded.
2. Install ElasticSearch. Download and unpack:
wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz
tar -xzvf elasticsearch-2.4.0.tar.gz
cd #{dir}/elasticsearch-2.4.0
vim config/elasticsearch.yml
path.data: /data/es            # data path
path.logs: /data/logs/es       # log path
network.host: local IP address # bind address
http.port: 9200                # port
Configure the execution user and directory:
groupadd elsearch
useradd elsearch -g elsearch -p elsearch
chown -R elsearch:elsearch elasticsearch-2.4.0
mkdir /data/es
mkdir /data/logs/es
chown -R elsearch:elsearch /data/es
chown -R elsearch:elsearch /data/logs/es
Start ElasticSearch as that user:
su elsearch
bin/elasticsearch
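As a quick sanity check that ElasticSearch is reachable, its banner can be fetched over HTTP. This is a minimal sketch, not part of the project code, assuming ElasticSearch listens on localhost:9200:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsPing {
    public static void main(String[] args) throws Exception {
        // Assumed address; replace with your ElasticSearch host and port
        HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:9200/").openConnection();
        conn.setRequestMethod("GET");
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line); // prints the cluster name and version JSON if ES is up
        }
        reader.close();
    }
}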
Integrate Logstash with ElasticSearch by modifying the Logstash configuration to:
input {
  stdin { }
}
output {
  elasticsearch {
    hosts => "elasticsearchIP:9200"
    index => "logstash-test"
  }
  stdout {
    codec => rubydebug {}
  }
}
Start Logstash again and type any text, for example "Hello ElasticSearch".
Query ElasticSearch for the input text; if it is found, the integration succeeded. ElasticSearch's native interface is not convenient for querying and display, so next we configure the query and analysis tool Kibana.
3. Install Kibana. Download the package:
wget https://download.elastic.co/kibana/kibana/kibana-4.6.1-linux-x86_64.tar.gz
Decompress Kibana and edit config/kibana.yml:
server.port: 8601                                  # port
server.host: "local IP address"                    # bind address
elasticsearch.url: "http://elasticsearchIP:9200"   # ElasticSearch address
Start the program: bin/kibana. Visit the configured IP:port and search in Discover for the characters just entered; the matching log entries are displayed nicely.
4. Use ELK from a Java web project. At this point our ELK environment is configured, and we try writing our Java web project's logs into ELK. To test the continuity of logs across a distributed system, we deploy two copies of the project that call each other, repeating the call n times; the key code is as follows:
@RequestMapping("http_client") @Controller public class HttpClientTestController { @Autowired private HttpClientTestBo httpClientTestBo; @RequestMapping(method = RequestMethod.POST) @ResponseBody public BaseResult doPost(@RequestBody HttpClientTestResult result) { HttpClientTestResult testPost = httpClientTestBo.testPost(result); return testPost; }}Copy the code
@Service
public class HttpClientTestBo {
    private static Logger logger = LoggerFactory.getLogger(HttpClientTestBo.class);

    @Value("${test_http_client_url}")
    private String testHttpClientUrl;

    public HttpClientTestResult testPost(HttpClientTestResult result) {
        logger.info(JSONObject.toJSONString(result));
        result.setCount(result.getCount() + 1);
        if (result.getCount() <= 3) {
            // Propagate the current requestId to the downstream service via a header
            Map<String, String> headerMap = new HashMap<String, String>();
            String requestId = RequestIdUtil.requestIdThreadLocal.get();
            headerMap.put(RequestIdUtil.REQUEST_ID_KEY, requestId);
            Map<String, String> paramMap = new HashMap<String, String>();
            paramMap.put("status", result.getStatus() + "");
            paramMap.put("errorCode", result.getErrorCode());
            paramMap.put("message", result.getMessage());
            paramMap.put("count", result.getCount() + "");
            String resultString = JsonHttpClientUtil.post(testHttpClientUrl, headerMap, paramMap, "UTF-8");
            logger.info(resultString);
        }
        logger.info(JSONObject.toJSONString(result));
        return result;
    }
}
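The JsonHttpClientUtil used above is a project-specific helper whose source is not shown here. Below is a minimal hypothetical sketch of what its post method might look like, assuming Apache HttpClient 4.x and fastjson; the JSON content type and error handling are assumptions, not the original implementation:
import java.util.Map;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import com.alibaba.fastjson.JSONObject;

public class JsonHttpClientUtil {
    // Hypothetical sketch: POST paramMap as a JSON body with the extra headers applied,
    // returning the response body as a String
    public static String post(String url, Map<String, String> headerMap, Map<String, String> paramMap, String charset) {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(url);
            post.setHeader("Content-Type", "application/json");
            for (Map.Entry<String, String> header : headerMap.entrySet()) {
                post.setHeader(header.getKey(), header.getValue()); // includes the propagated requestId
            }
            post.setEntity(new StringEntity(JSONObject.toJSONString(paramMap), charset));
            try (CloseableHttpResponse response = client.execute(post)) {
                return EntityUtils.toString(response.getEntity(), charset);
            }
        } catch (Exception e) {
            throw new RuntimeException("HTTP post failed", e);
        }
    }
}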
To make the calls linkable we register a requestId filter in web.xml that creates the requestId:
<filter>
  <filter-name>requestIdFilter</filter-name>
  <filter-class>com.virxue.baseweb.utils.RequestIdFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>requestIdFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
public class RequestIdFilter implements Filter {
    private static final Logger logger = LoggerFactory.getLogger(RequestIdFilter.class);

    /* (non-Javadoc)
     * @see javax.servlet.Filter#init(javax.servlet.FilterConfig)
     */
    public void init(FilterConfig filterConfig) throws ServletException {
        logger.info("RequestIdFilter init");
    }

    /* (non-Javadoc)
     * @see javax.servlet.Filter#doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain)
     */
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException,
            ServletException {
        // Resolve (or generate) the requestId and expose it to Logback via MDC
        String requestId = RequestIdUtil.getRequestId((HttpServletRequest) request);
        MDC.put("requestId", requestId);
        chain.doFilter(request, response);
        // Clean up so pooled threads do not leak the id into later requests
        RequestIdUtil.requestIdThreadLocal.remove();
        MDC.remove("requestId");
    }

    /* (non-Javadoc)
     * @see javax.servlet.Filter#destroy()
     */
    public void destroy() {
    }
}
public class RequestIdUtil {
    public static final String REQUEST_ID_KEY = "requestId";
    public static ThreadLocal<String> requestIdThreadLocal = new ThreadLocal<String>();
    private static final Logger logger = LoggerFactory.getLogger(RequestIdUtil.class);

    /**
     * Get the requestId from the request parameter or header, generating a new one if absent
     * @title getRequestId
     * @return the resolved requestId
     * @author sunhaojie [email protected]
     */
    public static String getRequestId(HttpServletRequest request) {
        String requestId = null;
        String parameterRequestId = request.getParameter(REQUEST_ID_KEY);
        String headerRequestId = request.getHeader(REQUEST_ID_KEY);

        // Neither the parameter nor the header carries a requestId:
        // this is the entry point of the call chain, so generate one
        if (parameterRequestId == null && headerRequestId == null) {
            logger.info("The request parameter and header carry no requestId");
            requestId = UUID.randomUUID().toString();
        } else {
            requestId = parameterRequestId != null ? parameterRequestId : headerRequestId;
        }

        requestIdThreadLocal.set(requestId);
        return requestId;
    }
}
We use Logback for log output and its MDC class to include the requestId in every log line without intrusive code changes. The configuration is as follows:
<!-- appender skeleton reconstructed around the values shown in the original post -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${log_base}/java-base-web.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${log_base}/java-base-web-%d{yyyy-MM-dd}-%i.log</fileNamePattern>
    <maxHistory>10</maxHistory>
    <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <maxFileSize>200MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
  <encoder>
    <charset>UTF-8</charset>
    <pattern>%d^|^%X{requestId}^|^%-5level^|^%logger{36}%M^|^%msg%n</pattern>
  </encoder>
</appender>
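To see the %X{requestId} conversion word in action, here is a minimal sketch; the requestId value is made up for illustration, and in the real project it is set by RequestIdFilter rather than by hand:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {
    private static final Logger logger = LoggerFactory.getLogger(MdcDemo.class);

    public static void main(String[] args) {
        MDC.put("requestId", "hypothetical-request-id"); // normally done by RequestIdFilter
        logger.info("hello elk"); // %X{requestId} in the pattern picks the value up automatically
        MDC.remove("requestId");
    }
}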
The log format uses "^|^" as a separator, which makes it easy for Logstash to split the fields. Deploy the two web projects on the test server, changing each one's log output location and the URL it calls so that the projects call each other.
5. Add a stdin.conf configuration file to Logstash:
input {
  file {
    path => ["/data/logs/java-base-web1/java-base-web.log", "/data/logs/java-base-web2/java-base-web.log"]
    type => "logs"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\[\d{4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}"
      negate => true
      what => "next"
    }
  }
}
filter {
  mutate {
    split => ["message", "^|^"]
    add_field => {
      "messageJson" => "{datetime:%{[message][0]}, requestId:%{[message][1]}, level:%{[message][2]}, class:%{[message][3]}, content:%{[message][4]}}"
    }
    remove_field => ["message"]
  }
}
output {
  elasticsearch {
    hosts => "10.160.110.48:9200"
    index => "logstash-%{type}"
  }
  stdout {
    codec => rubydebug {}
  }
}
Here path is the location of the log files; the multiline codec handles exception logs, so that the exception content ends up in the same log event as the exception header; the filter section splits the log content and rebuilds it as JSON-style fields for easy query and analysis.
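For intuition, take a hypothetical log line produced by the pattern above (all values invented for illustration):
2016-10-13 15:00:00,000^|^0f8a3c2e-1111-2222-3333-444455556666^|^INFO ^|^c.v.b.HttpClientTestBotestPost^|^{"count":1}
The mutate split breaks it on "^|^" into message[0] through message[4] (datetime, requestId, level, class, content), and add_field then assembles those pieces into the messageJson field described above.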
Test it out:
Postman is used to simulate the call, and the response reports a server exception. Searching for "Abnormal call interface" in the Kibana interface returns two entries.
Searching by the requestId of one of those entries shows how the request executed inside each system and across systems, making it easy to locate errors.
This is a first experiment with configuring ELK for log analysis; many details still need better handling, and everyone is welcome to discuss and learn together.