1. Zookeeper
Let’s test Zookeeper with a simple command:
echo ruok | nc localhost 2181
You should be able to see:
imok
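If you want a bit more detail than a health check, other four-letter commands work the same way; for example, stat prints basic server status (depending on your Zookeeper version, commands beyond the default may need to be whitelisted via 4lw.commands.whitelist):
echo stat | nc localhost 2181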
2. Kafka
Kafka is a distributed streaming platform, developed and open-sourced by LinkedIn, similar to a message queue. Kafka's strengths are the ability to build real-time streaming data pipelines, reliably move data between systems and applications, and transform data streams in real time. Kafka can be used in many scenarios, especially in high-throughput systems. First, let's go over the basic concepts of Kafka:
- Topic: Kafka categorizes messages, and each category of messages is called a Topic.
- Producer: the object that publishes messages is called a topic Producer.
- Consumer: the objects that subscribe to messages and process them are called topic Consumers.
- Broker: published messages are stored in a set of servers called a Kafka cluster. Each server in the cluster is a Broker. Consumers can subscribe to one or more topics and pull data from the Brokers to consume the published messages.
1. Create a topic
We can log in to the Kafka container and run some simple tests. The command to log in to the container is:
docker-compose exec kafka bash
Create a Topic called test with only one partition and one replica:
kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic test
Once it is created, you can list the existing topics with the following command:
kafka-topics.sh --list --zookeeper zookeeper:2181
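You can also check the partition and replica details of the topic with the --describe flag, assuming the same zookeeper:2181 address as above:
kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test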
2. Send messages
Kafka provides a command-line tool for sending messages; each line you type is sent as a single message. Run the producer and enter some text:
kafka-console-producer.sh --broker-list localhost:9092 --topic test
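For example, type a couple of lines; each line is sent as a separate message (the > prompt comes from the producer tool):
> hello kafka
> this is a test message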
3. Consume messages
Kafka also provides a command-line tool for consuming messages; it prints the stored messages to the console:
kafka-console-consumer.sh --zookeeper zookeeper:2181 --topic test --from-beginning
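If you sent the two example messages above, the consumer should print something like:
hello kafka
this is a test message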
3. Elasticsearch
The Elastic Stack consists of several components whose version numbers were unified starting at 5.0; it has since evolved to version 6.0, which is the version we are using this time. See [Elasticsearch][4] for more details on the changes; here I will only cover the basic concepts.
Elasticsearch makes it easy to filter and sort query results because, like MongoDB, it stores entire documents and indexes them by content, and documents are represented as JSON. For example, a simple user object can be represented like this:
{
  "name": "Xiao Ming",
  "phone": "10086",
  "age": "25",
  "info": {
    "site": "https://sunnyshift.com",
    "likes": ["games", "music"]
  }
}
In Elasticsearch there are four terms: index, type, document, and field. Elasticsearch can contain multiple indexes, each index can contain multiple types, each type can contain multiple documents, and each document can contain multiple fields. You can roughly think of it as a database: an index corresponds to a database, a type to a table, a document to a row, and a field to a column.
Create an index and insert some data:
PUT /banji/xuesheng/1
{
  "name": "Xiao Ming",
  "age": "12"
}

PUT /banji/xuesheng/2
{
  "name": "Xiao Hong",
  "age": "16"
}
These commands can be run from the Dev Tools console in Kibana. Once the data is in, you can retrieve it with a simple query:
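Below is a minimal example of what such a query might look like (a sketch against the banji index created above; adjust the field and value as needed):
GET /banji/xuesheng/_search
{
  "query": {
    "match": { "name": "Xiao Ming" }
  }
}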
This is just a simple introduction to how to use it; beyond that you can do filtering, compound queries, full-text search, and even highlight matches in your search results.
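As an example of the highlighting mentioned above, a sketch that highlights matches in the name field might look like this:
GET /banji/xuesheng/_search
{
  "query": { "match": { "name": "Xiao Ming" } },
  "highlight": { "fields": { "name": {} } }
}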
4. Logstash
Logstash is a log collector that supports a very large number of input and output sources. Here we use Nginx access logs to test the Elastic Stack. The newer approach in recent Logstash releases is to collect data with a Beats component, but we will stick with the traditional way and write the configuration directly in a configuration file.
Logstash uses a Ruby Gem library called FileWatch to listen for file changes. The library supports glob expansion of file paths and keeps a database file called .sincedb to track the current read position of each monitored log file.
With Docker, the configuration file path is under logstash/config. You can place multiple configuration files in this directory, and Logstash will merge them into one. Let's define a configuration to read the Nginx access log:
input {
  file {
    path => [ "/var/log/nginx/access.log" ]
    type => "accesslog"
  }
}

output {
  # Output to Elasticsearch with the index access-log
  elasticsearch {
    hosts => "localhost:9200"
    index => "access-log"
    user => "elastic"
    password => "changeme"
  }
}
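If you want each access log line parsed into structured fields instead of being stored as a single message string, you can also add a filter block. This is only a sketch and assumes Nginx is writing the default combined log format, which the built-in COMBINEDAPACHELOG grok pattern can parse:
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}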
Note that the Nginx log path written here is relative to the Logstash container, so that path needs to be mounted into the container. Then restart Logstash; if everything is OK, you can view the logs in Kibana:
docker-compose stop logstash
docker-compose up -d logstash
5. Kibana
Kibana is a visual UI that lets you query the data indexed in Elasticsearch and display statistics with nice charts. We need to tell Kibana about the access-log index we just created, on the Management page.
At this point, the Nginx access logs are being collected and displayed. Kibana has many more uses than what is covered here.