0x0 Overview

The ELK Stack, now officially called the Elastic Stack, is Elastic’s free, open-source software portfolio designed for centralized log management. It allows you to search, analyze, and visualize logs from different sources.

To install and configure the ELK Stack on Ubuntu, the following prerequisites are required:

  • Ubuntu 20.04
  • Root (or sudo) privileges for configuration

0x1 Contents
  • ELK Stack components

  • Install Java and all dependencies

  • Install and configure Elasticsearch

  • Install and configure Logstash

  • Install and configure Kibana

  • Install and configure Nginx

  • Install and configure Filebeat

  • Configure Linux logs to Elasticsearch

  • Create log dashboards in Kibana

  • Monitor SSH events

0x2 ELK Stack components

Elasticsearch: an open-source search engine based on Apache Lucene(TM). It uses RESTful APIs to store and retrieve data.

Logstash: an open-source data collection engine that collects data from different sources and sends it to Elasticsearch.

Kibana: a web platform for analyzing and visualizing logs.

Filebeat: a lightweight log collector and forwarder that ships data to Logstash or Elasticsearch.

0x3 Install Java and all dependencies

Run the following command to install OpenJDK and the other software packages Elasticsearch requires:

sudo apt install -y openjdk-14-jdk wget apt-transport-https curl

Then import the Elasticsearch public signing key and add the APT repository:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add the software source:
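A typical entry for the 7.x release line looks like this (adjust the version to the release you want to install; this is the line from Elastic’s own installation guide):

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list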

0x4 Install and Configure Elasticsearch

Update the package index:

sudo apt update

Then install Elasticsearch (the download can be slow in some regions, so be patient):

sudo apt-get install elasticsearch

After the installation is complete, configure Elasticsearch

Elasticsearch listens on port 9200 by default. For security, you should restrict access from the Internet so that external networks cannot reach the data or the Elastic cluster through the REST APIs. The Elasticsearch configuration file is elasticsearch.yml; just edit it.

Open the configuration file:

sudo gedit  /etc/elasticsearch/elasticsearch.yml

Find the network settings (the listening interface and port), delete the leading # comment markers, and change the lines to something like this:
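A minimal sketch that binds Elasticsearch to the loopback interface only (both keys already exist in elasticsearch.yml as comments):

network.host: localhost
http.port: 9200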

Save the file and start the Elasticsearch service:

sudo systemctl start elasticsearch

View the service status to verify that it started:

sudo systemctl status elasticsearch

Then send a test request with curl:

curl -X GET localhost:9200

If you see a JSON response describing the node and cluster, Elasticsearch has started successfully.

You can also check it in your browser at http://localhost:9200
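The response should look roughly like this (abridged; the node name and version number depend on your installation):

{
  "name" : "ubuntu",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.10.0", ... },
  "tagline" : "You Know, for Search"
}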

0x5 Install and Configure Logstash

First make sure OpenSSL is on your system, then install Logstash:

openssl version -a
sudo apt install logstash -y

Create an SSL certificate to secure the data that Rsyslog and Filebeat send to Logstash.

Create an ssl directory under the Logstash configuration directory and generate the certificate:

sudo mkdir -p /etc/logstash/ssl
cd /etc/logstash
sudo openssl req -subj '/CN=elkmaster/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt

To make later configuration easier, you can map a host name to the host’s IP address in /etc/hosts.
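For example (the IP address below is a placeholder; use your host’s real address, and note that the name matches the CN=elkmaster subject used in the certificate above):

10.0.0.10    elkmaster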

Next, create three files in the conf.d directory: filebeat-input.conf for receiving data from Filebeat, syslog-filter.conf for filtering system logs, and output-elasticsearch.conf for exporting data to Elasticsearch.

First, create filebeat-input.conf in the Logstash configuration directory:

cd /etc/logstash/
sudo gedit conf.d/filebeat-input.conf

Add the following:

input {
  beats {
    port => 5443
    type => "syslog"
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}

Then create the syslog-filter.conf filter file, which uses the grok filter so Logstash can extract fields from each log line according to the given pattern.

sudo gedit conf.d/syslog-filter.conf

Enter the following information:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Then create the output-elasticsearch.conf configuration file to send data on to Elasticsearch.

sudo gedit conf.d/output-elasticsearch.conf

As follows:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Once the configuration files are ready, start the Logstash service and check whether it works:

sudo systemctl start logstash
sudo systemctl status logstash

If no errors are reported, the service has started normally.

0x6 Install and Configure Kibana

Kibana is also installed via APT:

sudo apt install kibana

After the installation, edit the Kibana configuration file:

sudo gedit /etc/kibana/kibana.yml

Change the listening port, the listening address, and the Elasticsearch address, as sketched below.
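A minimal sketch of the relevant kibana.yml settings (all three keys exist in the default file as comments; uncomment and adjust them to your environment):

server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]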

Save the file and start the Kibana service:
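sudo systemctl start kibana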

You can then access it in the browser at http://localhost:5601 (Kibana’s default port).

0x7 Install and Configure Nginx

Nginx is installed here mainly to serve as a reverse proxy in front of Kibana.

Start by installing Nginx and apache2-utils:

sudo apt install nginx apache2-utils -y

After the installation is complete, create the Kibana virtual host configuration file:

sudo gedit /etc/nginx/sites-available/kibana

As follows:

server {
    listen 80;
    server_name localhost;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Create a symbolic link to enable the configuration:

sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/

Then set up basic authentication for access to the Kibana dashboard (you will be prompted to choose a password for the elastic user):

sudo htpasswd -c /etc/nginx/.kibana-user elastic

Then test the Nginx configuration and restart the service:

sudo nginx -t
sudo systemctl restart nginx

0x8 Install and Configure Filebeat

Download Filebeat and install it.

Download address: www.elastic.co/cn/download…

Choose the package that suits your system. Since we are installing on Ubuntu, the DEB version is appropriate; and because we already added Elastic’s APT source earlier, we can simply install with APT. The official guide for adding the source is at www.elastic.co/guide/en/be…

sudo apt install filebeat -y

Then edit the Filebeat configuration file, which is located at:

/etc/filebeat/filebeat.yml

First, enable the log input by setting enabled to true in the filebeat.inputs section, as sketched below.
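A typical input section in the 7.x configuration format (the paths below are an example; point them at whichever logs you want to ship):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log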

Then modify the output section. Since this setup ships data through Logstash, disable the Elasticsearch output and enable the Logstash output instead (set the values according to your actual situation).
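A sketch assuming the elkmaster host name mapped in /etc/hosts earlier, the Beats port 5443 from filebeat-input.conf, and the certificate that is copied to /etc/filebeat in a later step:

# Disable the default Elasticsearch output by commenting it out:
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# Enable the Logstash output instead:
output.logstash:
  hosts: ["elkmaster:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]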

Modify the Kibana section as well:
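For a local Kibana instance on the default port, it looks like this:

setup.kibana:
  host: "localhost:5601"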

Save the changes.

Then initialize Filebeat:

sudo filebeat setup

Copy the logstash-forwarder.crt certificate to the /etc/filebeat directory:

sudo cp /etc/logstash/ssl/logstash-forwarder.crt /etc/filebeat/

Then start the Filebeat service:

sudo systemctl start filebeat
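Once Filebeat has shipped some logs, you can confirm that data reached Elasticsearch by listing the indices; a filebeat-* index should appear:

curl -X GET "localhost:9200/_cat/indices?v"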
0x9 Configure Linux Logs to Elasticsearch

Once rsyslog is configured to send logs to Logstash, those logs will automatically be forwarded on to Elasticsearch.

Before pointing rsyslog at Logstash, we first need to configure log forwarding between Logstash and Elasticsearch.

Create a configuration file in /etc/logstash/conf.d to set up log forwarding to Elasticsearch:

cd /etc/logstash/conf.d/
sudo gedit logstash.conf


The content of the configuration file is as follows:

# Listen for rsyslog messages on UDP port 10514.
input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

# The filter pipeline stays empty here; no formatting is done.
filter { }

# Every single log will be forwarded to Elasticsearch. If you are
# using another port, you should specify it here.
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}

The configuration file consists of three parts: input defines the source of the logs, filter describes how they are processed, and output defines where they are sent.

Then restart the Logstash service:

sudo systemctl restart logstash


Next, configure log forwarding from rsyslog to Logstash. Rsyslog can use templates to transform logs before forwarding them.

To enable forwarding, create a 70-output.conf configuration file in the /etc/rsyslog.d directory:

cd /etc/rsyslog.d/
sudo gedit 70-output.conf


Add the following:

*.* @127.0.0.1:10514;json-template

This sends all logs to 127.0.0.1:10514 over UDP and formats them with the json-template template.

Next, create the JSON template file in the same directory:

sudo gedit 01-json-template.conf


As follows:

template(name="json-template"
  type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}


Then restart the rsyslog service:

sudo systemctl restart rsyslog

Check whether Logstash is listening on the port:

ss -na | grep 10514

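If everything is working, you should see a UDP socket bound to the port, along these lines (the exact columns vary with the ss version):

udp   UNCONN   0   0   127.0.0.1:10514   0.0.0.0:*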

If the port is not listening and the Logstash log reports an error, there is almost certainly a syntax error in one of the configuration files. The ELK components are strict about configuration syntax, so check the files carefully.

0x10 Create a Log Dashboard in Kibana

Open the Kibana interface in your browser

First, you need to create an index pattern.

Go to Stack Management → Index Patterns in Kibana.

Then click Create Index Pattern

Enter logstash-* as the pattern, then click Next Step.

For the time filter field, select @timestamp.

Then click Create Index Pattern

After it is added successfully, the pattern appears in the index pattern list.

Go back to Kibana’s Discover page, where you can query and search your data.

0x11 Monitor SSH Events

In the search bar, set the filter condition to programname: sshd, which matches the programname field produced by the JSON template.

This way, you can see all events related to the sshd program.

0x12 More Resources

Configure SSL, TLS, and HTTPS to secure Elasticsearch, Kibana, Beats, and Logstash | Elastic Blog www.elastic.co/cn/blog/con…

How to monitor an Nginx web server with the Elastic Stack | Elastic Blog www.elastic.co/cn/blog/how…