Previously, we covered distributed transactions and the Spring Cloud Gateway service gateway. Today we finally reach the end of our microservices series. Unsurprisingly, distributed logging will be the last installment at this stage; more may be added in the future.
Without further ado, let's begin the final chapter of the current microservices series.
1. Basic introduction
1. What is distributed logging
In daily work you often encounter exceptions and error messages in the production environment, and you need to check the logs to troubleshoot them. In a distributed application, however, the logs are scattered across different machines. If you manage dozens or hundreds of servers, logging on to each machine in turn to read its logs quickly becomes cumbersome and inefficient. That is why we use centralized log management: a distributed logging system collects, tracks, and processes large-scale log data in one place.
A typical log-analysis workflow is to run grep and awk directly against the log files to extract the information you need. At scale, though, this approach breaks down: there are too many logs to archive, text searches become too slow, and multi-dimensional queries are impractical. Centralized log management is required: the logs from all servers are collected and summarized. The common solution is a centralized log collection system that gathers, manages, and exposes the logs of every node in a unified way.
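For example, the traditional single-machine approach might look like this (the log path and error pattern are illustrative, not from any particular project):

# count the most frequent ERROR messages in one service's log on one host
grep "ERROR" /var/log/myapp/app.log | awk -F 'ERROR' '{print $2}' | sort | uniq -c | sort -rn | head
# now imagine repeating this across dozens of servers, one SSH session at a time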
2. ELK distributed logs
ELK is short for Elasticsearch, Logstash, and Kibana.
Elasticsearch is a Java-based, open-source, distributed search engine. Its features include zero configuration, automatic discovery, index sharding, index replication, a RESTful interface, multiple data sources, and automatic search load balancing.
Kibana, based on Node.js, is also a free, open-source tool. It provides a log-analysis-friendly web interface for Logstash and Elasticsearch, letting you aggregate, analyze, and search important log data.
Logstash, based on Java, is an open-source tool for collecting, parsing, and storing logs.
The following is how ELK works:
For example, our project is deployed on 3 servers. Normally, Elasticsearch should be installed on all 3 servers, Kibana should be installed on the primary server, and Logstash should be installed on one of the node servers.
So let’s set it up.
2. Build ELK
During setup, make sure the component versions match; mismatched versions lead to all kinds of startup errors.
Version Description:
CentOS 7.9
JDK 1.8
Elasticsearch - 8.0.0
Logstash - 8.0.0
Kibana - 8.0.0
1. Elasticsearch
1.1 Introduction
Elasticsearch is a Lucene-based search server. It provides a distributed, multi-user-capable full-text search engine with a RESTful web interface. Developed in Java and released as open source under the Apache License, Elasticsearch is a popular enterprise-grade search engine.
1.2 Installation and configuration
First, go to the official download page (Download Elasticsearch | Elastic) and choose the Linux version.
PS: The download speed of the official website is quite fast
In this example, I used CentOS 7 in the VM. First, I copied the tar package to /usr/local.
- Unpack the archive
tar -zxvf elasticsearch-8.0.0-linux-x86_64.tar.gz
- Modify the configuration
cd /usr/local/elasticsearch-8.0.0/config/
vim elasticsearch.yml
cluster.name: my-application
node.name: node-1
path.data: /home/esuser/data # Directory for storing data files
path.logs: /home/esuser/logs # Directory for storing log files
network.host: 0.0.0.0 # open to all IP addresses
http.port: 9200 # port
discovery.seed_hosts: ["192.168.20.105"]
cluster.initial_master_nodes: ["node-1"]
Note: the path.data and path.logs directories need to be created manually with mkdir, as shown below.
Note: these settings are commented out by default; uncomment them after making your changes.
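A minimal sketch of that step (run as root; the chown assumes the esuser account created in the next step, since Elasticsearch will run as that user and must be able to write to these directories):

# create the data and log directories referenced in elasticsearch.yml
mkdir -p /home/esuser/data /home/esuser/logs
# once esuser exists (created below), hand the directories over to it
chown -R esuser:esuser /home/esuser/data /home/esuser/logs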
In addition, since this is just a test environment, we can turn off the security-related settings by changing them from true to false:
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
enabled: false
keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
enabled: false
Save the settings and exit the editor.
- Create the esuser user
Because Elasticsearch cannot be run directly as the root user, we need to create an esuser account:
# create a user
useradd esuser
# set the password
passwd esuser
# grant the user ownership of the installation
chown -R esuser:esuser /usr/local/elasticsearch-8.0.0
1.3 Startup
Switch to the esuser user and start Elasticsearch:
su esuser
cd /usr/local/elasticsearch-8.0.0/bin
./elasticsearch -d # start in the background
tail -f /home/esuser/logs/my-application.log # watch the startup log
If there are no errors, it has started successfully. Open the following address in a browser: http://192.168.20.105:9200/
192.168.20.105 is the static IP address of the VM. If access succeeds, you will see the cluster information JSON (remember to open port 9200 in the firewall).
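You can also verify from the command line (a quick check; 9200 is the HTTP port configured above):

curl http://192.168.20.105:9200/
# expected: a JSON document containing the cluster name, node name, and version number 8.0.0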
2. Logstash
2.1 Introduction
Logstash is an open source server-side data processing pipeline that captures data from multiple sources at the same time, transforms it, and sends it to a repository (ElasticSearch).
2.2 Installation and configuration
Official download page: Download Logstash Free | Get Started Now | Elastic
Copy the downloaded tar package to /usr/local on the VM.
- Unpack the archive
tar -zxvf logstash-8.0.0-linux-x86_64.tar.gz
- Add a configuration file
cd /usr/local/logstash-8.0.0/bin
touch logstash-elasticsearch.conf
vim logstash-elasticsearch.conf
input {
    stdin {}
}
output {
    elasticsearch {
        hosts => ["192.168.20.105:9200"]
    }
    stdout {
        codec => rubydebug
    }
}
Note the Elasticsearch address inside: it must point at the instance started earlier. You can sanity-check the pipeline as shown below before running it in the background.
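With the stdin input and stdout/rubydebug output above, anything typed on the console is printed back as a structured event and also indexed into Elasticsearch, so a quick foreground test looks like this (press Ctrl+C to stop):

./logstash -f logstash-elasticsearch.conf
# after the pipeline reports it has started, type a test message, e.g.:
#   hello elk
# the event is echoed by the rubydebug codec and sent to Elasticsearch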
2.3 Startup
This time, operate as the root user:
cd /usr/local/logstash-8.0.0/bin
nohup ./logstash -f logstash-elasticsearch.conf & # start in the background
Note: when starting in the background, an error appears in the nohup.out log, but Logstash still starts successfully, so usage is not affected. No error is reported if you start it without nohup.
Use jps to view all current Java processes.
The Logstash process appears, so the startup succeeded.
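If you typed a test message through the stdin input, you can confirm that Logstash wrote it to Elasticsearch (the exact index or data-stream name depends on this Logstash version's output defaults, so just list everything):

curl 'http://192.168.20.105:9200/_cat/indices?v'
# look for a newly created index (or backing data-stream index) holding the test event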
3. Kibana
3.1 Introduction
Kibana is an open-source data analysis and visualization platform designed to work with Elasticsearch as part of the Elastic Stack. You can use Kibana to search, view, and interact with the data in Elasticsearch indices, and easily analyze and present it with charts, tables, and maps.
3.2 Installation and configuration
Official download page: Download Kibana Free | Get Started Now | Elastic
Copy the downloaded tar package to /usr/local on the VM.
- Unpack the archive
tar -zxvf kibana-8.0.0-linux-x86_64.tar.gz
- Modify the configuration
cd /usr/local/kibana-8.0.0/config/
vim kibana.yml
server.port: 5601 # port
server.host: "0.0.0.0" # open access to all IP addresses
server.name: "my-kibana"
elasticsearch.hosts: ["http://192.168.20.105:9200"] # es address
i18n.locale: "zh-CN" # localization
Note: these settings are commented out by default; uncomment them after making your changes.
- Kibana can’t be launched as root either, so we’ll use the esuser user we created earlier
chown -R esuser:esuser /usr/local/kibana-8.0.0
3.3 Startup
Switch to the esuser user
su esuser
Start it in the background:
nohup /usr/local/kibana-8.0.0/bin/kibana &
Once it starts successfully, open the following address in a browser: http://192.168.20.105:5601/
If the Kibana welcome page is displayed, the startup succeeded.
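You can also run a quick smoke test from the command line with Kibana's status API:

curl http://192.168.20.105:5601/api/status
# expected: a JSON document reporting the overall status as available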
On the home page, choose the sample data option and click Add data.
Then open Discover.
You can now see the sample data.
At this point, all the software required for our distributed logging has been installed, configured, and started successfully.
As for how to use it in a project, that deserves an article of its own. I won't cover it at this stage; I'll add it later!