Step 1 – Install and configure Elasticsearch

By default, the Elastic Stack components are not available in CentOS's standard package repositories, but yum can install them once you add Elastic's package repository.

All Elastic Stack packages are signed with the Elasticsearch signing key to protect your system from package spoofing. Packages authenticated with the key are treated as trusted by the package manager. In this step, you will import the Elasticsearch public GPG key and add the Elastic repository in order to install Elasticsearch.

Download and install the public signing key

# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
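If you want to confirm that the key was imported, you can list the GPG public keys that rpm knows about; the Elastic key should appear among them:

# rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'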

Add the RPM repository

Create elastic.repo under /etc/yum.repos.d/ with the following contents:

[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
# yum update
# yum install elasticsearch
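As a quick sanity check, you can query the installed package metadata to confirm the version that was pulled from the Elastic repository:

# rpm -qi elasticsearch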

After installing Elasticsearch, open its main configuration file /etc/elasticsearch/elasticsearch.yml in an editor.

Elasticsearch’s configuration file is in YAML format, which means indentation is very important! When editing this file, make sure that you do not add any extra spaces and that you indent with spaces, not tabs:

Elasticsearch accepts REST traffic on port 9200, so you will want to restrict external access to your Elasticsearch instance to prevent outsiders from reading your data or shutting down your Elasticsearch cluster through the REST API. Find the line that specifies network.host, uncomment it, and replace its value with 192.168.1.21 as follows:

#...
network.host: 192.168.1.21
#...

# From version 7.0 on, this setting must be uncommented
cluster.initial_master_nodes: ["node-1"]
#...

Save and close elasticsearch.yml. Then use systemctl to start the Elasticsearch service and enable it at boot:

# systemctl enable elasticsearch
# systemctl start elasticsearch
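Elasticsearch can take a short while to start. Before testing with curl, you can confirm that the service came up cleanly and is listening on port 9200:

# systemctl status elasticsearch
# ss -tlnp | grep 9200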

You can test if your Elasticsearch service is running by sending an HTTP request:

$ curl "192.168.1.21:9200"

You should see a response that displays some basic information about the local node, something like:

{
  "name" : "lq8hbCV"."cluster_name" : "elasticsearch"."cluster_uuid" : "yjl6Pw4NQpmLgm7gN__fvg"."version" : {
    "number" : "6.6.1"."build_flavor" : "default"."build_type" : "rpm"."build_hash" : "1fd8f69"."build_date" : "The 2019-02-13 T17:10:04. 160291 z"."build_snapshot" : false."lucene_version" : "7.6.0"."minimum_wire_compatibility_version" : "5.6.0"."minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
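Beyond this banner, you can also query the cluster health endpoint. A green or yellow status means the node is working; yellow is normal for a single-node setup, since replica shards cannot be allocated anywhere:

$ curl "192.168.1.21:9200/_cluster/health?pretty"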

Now that Elasticsearch is up and running, let’s install Kibana, the next component of the Elastic Stack.

Note: if a previous version of Filebeat exists on the machine, the license can become unavailable; this is a big pitfall to watch out for.

Step 2 – Install and configure the Kibana dashboard

Per the installation order in the official documentation, Kibana is the next component to install after Elasticsearch. Once Kibana is set up, we can use its interface to search and view the data stored in Elasticsearch.

Since the Elastic repository was added in the previous step, you can install the rest of the Elastic Stack using yum:

# yum install kibana

Configuration

Open the Kibana configuration file /etc/kibana/kibana.yml and edit the following settings:

# To allow access from my LAN, I set this to the server's LAN address.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.21"

#...
# The URLs of the Elasticsearch instances to use for all your queries.
# Fill in the host that Elasticsearch is listening on.
elasticsearch.hosts: ["http://192.168.1.21:9200"]
#...

Then enable and start the Kibana service:

# systemctl enable kibana
# systemctl start kibana
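Kibana takes a few seconds to come up. You can confirm it is answering on its default port 5601 by querying its status API (the response body varies by version):

$ curl "http://192.168.1.21:5601/api/status"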

If server.host is set to a LAN address, Kibana can be accessed directly from the LAN.

Kibana reverse proxy settings

If Kibana is configured to listen only on localhost rather than a LAN address, you must set up a reverse proxy to allow external access to it. We will use Nginx for this purpose.

First, use the openssl command to create an administrative Kibana user that you will use to access the Kibana web interface. As an example, we will name this account kibanaadmin, but for greater security, we recommend that you choose a nonstandard name for the user that is hard to guess.

The following command will create the administrative Kibana user and password and store them in the htpasswd.users file. You will configure Nginx to require this username and password and to read this file momentarily:

# echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Enter and confirm your password at the prompt. Remember or write down this login information as it is needed to access the Kibana Web interface.
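If you want to verify that the entry was written, you can inspect the file; the password is stored as an apr1 (Apache MD5) hash rather than in plain text:

# cat /etc/nginx/htpasswd.users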

Next, we will create an Nginx server block file. As an example, we will call this file example.com.conf, although you may find it helpful to give yours a more descriptive name. For instance, if you have set up an FQDN and DNS records for this server, you can name the file after your FQDN:

# vi /etc/nginx/conf.d/example.com.conf

Add the following code block to the file, making sure to update example.com and www.example.com to match your server’s FQDN or public IP address. This code configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. It also configures Nginx to read the htpasswd.users file and require basic authentication.

/etc/nginx/conf.d/example.com.conf

server {
    listen 80;

    server_name example.com www.example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

When you’re done, save and close the file.

Then check the configuration for syntax errors:

# nginx -t
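If the syntax is valid, nginx -t prints something like:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful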

If any errors are reported in the output, go back and double-check that the contents you placed in the configuration file were added correctly. Once the test passes, restart the Nginx service:

# systemctl restart nginx

Kibana can now be accessed through your FQDN or the public IP address of your Elastic Stack server. You can check the Status page of the Kibana server by navigating to the following address and entering login credentials when prompted:

http://{your_server_ip}/status

This status page displays information about server resource usage and lists installed plug-ins.
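You can also exercise the proxy and the basic authentication from the command line; curl will prompt for the kibanaadmin password:

$ curl -u kibanaadmin "http://{your_server_ip}/status"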

[Screenshot: Kibana server status page]

When starting Kibana here, pay close attention to the permissions on its log files. If you run into any problems, check the system log; it can turn up unexpected clues.
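A minimal troubleshooting sketch, assuming systemd’s journal is in use (the log file path below is only an example and may differ on your system):

# journalctl -u kibana --since "10 minutes ago"
# ls -l /var/log/kibana.log    # example path: verify the kibana user owns its log file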

Step 3 – Install and configure Filebeat

The Elastic Stack uses several lightweight data shippers called Beats to collect data from various sources and transfer it to Logstash or Elasticsearch. Here are the Beats currently available from Elastic:

  • Filebeat: collects and ships log files.
  • Metricbeat: collects metrics from your systems and services.
  • Packetbeat: collects and analyzes network data.
  • Winlogbeat: collects Windows event logs.
  • Auditbeat: collects Linux audit framework data and monitors file integrity.
  • Heartbeat: monitors the availability of services with active probing.

In this tutorial, we’ll use Filebeat to forward local logs to our Elastic Stack.

Install Filebeat with yum:

# yum install filebeat

Next, configure Filebeat to connect to Elasticsearch. Here, we’ll modify the sample configuration file that comes with Filebeat.

Open the Filebeat configuration file:

# vi /etc/filebeat/filebeat.yml

Filebeat supports numerous outputs, but you will usually send events either directly to Elasticsearch or to Logstash for additional processing. In this tutorial, we will use Elasticsearch to perform additional processing on the data collected by Filebeat, so Filebeat needs to send data directly to Elasticsearch. Let’s enable that output: find the output.elasticsearch section and make sure it is not commented out with #.

/etc/filebeat/filebeat.yml

#...
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.21:9200"]
#...
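Before going any further, you can ask Filebeat itself to verify that it can reach and talk to the configured Elasticsearch output:

# filebeat test output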

Configure Filebeat to collect Nginx logs

You can now extend Filebeat’s functionality using the Filebeat module. In this tutorial, you will use the Nginx module, which collects and analyzes access and error logs generated by Nginx.

Let’s enable it:

# filebeat modules enable nginx

You can view a list of enabled and disabled modules by running the following command:

# filebeat modules list
Enabled:
nginx

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
osquery
postgresql
redis
suricata
system
traefik

Next, we need to initialize the environment to pave the way for log parsing; the setup command loads the index template and the Kibana dashboards:

# filebeat setup -e

By default, each Filebeat module uses the default log paths for your OS. In this tutorial, you need to point the Nginx module at your custom log paths. You can configure the module’s parameters in the /etc/filebeat/modules.d/nginx.yml configuration file.

# vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
        - /webdata/logs/*.access.log

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
        - /webdata/logs/*.error.log

After configuring, check that the configuration file is valid:

# filebeat test config

Now let’s enable and start Filebeat:

# systemctl enable filebeat
# systemctl start filebeat
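Once Filebeat has been running for a moment, you can confirm that events are arriving by listing the Filebeat indices in Elasticsearch; a filebeat-* index with a growing document count means the pipeline works end to end:

$ curl "192.168.1.21:9200/_cat/indices/filebeat-*?v"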

Here is the result:

[Screenshot: the collected Nginx logs displayed in Kibana]

Reference

  • How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on CentOS 7