In my previous tutorial, “Elastic: Deploying an Elastic Stack with Docker,” I explained in detail how to deploy our Elastic Stack using Docker. Careful readers may have noticed that there is no security in it. So how do we secure our Elastic Stack for Docker deployments?
In today’s tutorial, I will describe in detail, step by step, how to create a secure configuration for this Docker deployment.
Create a docker-compose.yml
In the previous tutorial, the docker-compose.yml file used there was not configured for security. We need to modify it:
docker-compose.yml
version: '3.0'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_STACK_VERSION}
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.keystore.type=PKCS12
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.type=PKCS12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elastic-stack-ca.p12:/usr/share/elasticsearch/config/elastic-stack-ca.p12
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_STACK_VERSION}
    container_name: es02
    environment:
      - node.name=es02
      - xpack.security.enabled=true
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      # transport TLS is required on every node once security is enabled
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.keystore.type=PKCS12
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.type=PKCS12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elastic-stack-ca.p12:/usr/share/elasticsearch/config/elastic-stack-ca.p12
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet
  kibana:
    image: docker.elastic.co/kibana/kibana:${ELASTIC_STACK_VERSION}
    container_name: kibana
    ports: ['5601:5601']
    networks: ['esnet']
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - I18N_LOCALE=zh-CN
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD="123456"
    depends_on: ['es01']

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local

networks:
  esnet:
To make our docker-compose.yml work for more versions, I added an ELASTIC_STACK_VERSION variable to it. We need to create a .env file in the same directory as the docker-compose.yml file, with the following contents:
ELASTIC_STACK_VERSION=7.6.2
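To confirm that docker-compose picks up this variable, we can ask it to print the resolved configuration. This is just a quick sanity check, not a required step:

docker-compose config | grep image

Each image: line in the output should now end with the :7.6.2 tag.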
We can choose whichever version we want. Another important change is that I added the following section to es01 (es02 needs the same transport SSL settings and certificate mount):
- xpack.security.enabled=true
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.keystore.type=PKCS12
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.security.transport.ssl.keystore.path=elastic-stack-ca.p12
- xpack.security.transport.ssl.truststore.path=elastic-stack-ca.p12
- xpack.security.transport.ssl.truststore.type=PKCS12
According to Elasticsearch's requirements, if we enable xpack.security.enabled in the Docker environment, we must also enable xpack.security.transport.ssl.enabled. Otherwise, we will see the following error message:
[1] : Transport SSL must be enabled if security is enabled on a [basic] license. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]
Because the certificate is required, I also added the following part:
volumes:
- ./elastic-stack-ca.p12:/usr/share/elasticsearch/config/elastic-stack-ca.p12
Above, we mount the elastic-stack-ca.p12 file from the docker-compose.yml directory into the container's config directory (the same mount is added for es02). We will generate this certificate in the following section.
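Once the certificate file exists and es01 is running (both happen later in this tutorial), a quick way to confirm that the mount works is to list the file inside the container:

docker-compose exec es01 ls -l /usr/share/elasticsearch/config/elastic-stack-ca.p12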
For the Kibana part, I modified the following part:
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD="123456"
Here, we use the built-in elastic user with the password 123456. This password will be set in a later step.
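If you would rather not hard-code the password in docker-compose.yml, one option is to keep it in the .env file we created above and let docker-compose substitute it. A minimal sketch, where ELASTIC_PASSWORD is simply a variable name I chose for illustration:

# .env
ELASTIC_STACK_VERSION=7.6.2
ELASTIC_PASSWORD=123456

# docker-compose.yml, kibana service
    environment:
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}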
Create a certificate
As shown above, we need to generate a certificate for the TLS configuration. So how do we produce this certificate? We start from an instance of Elasticsearch. This instance could be created as I described in my previous tutorial, “Elastic: Beginner’s Guide.”
In today’s tutorial we will use Docker to create an instance of Elasticsearch:
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
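Without a --name option, Docker assigns a random container name (vigorous_goldstine below). If you prefer a predictable name, you could start it like this instead, where es-certgen is just an arbitrary name I chose; the rest of this tutorial keeps the auto-generated name:

docker run -d --name es-certgen -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2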
We use the following command:
docker ps
We can see a running instance of Elasticsearch. Notice that the name of the container above is vigorous_goldstine.
We can enter the docker container with the following command:
docker exec -it vigorous_goldstine bash
$ docker exec -it vigorous_goldstine bash
[root@2b7a12d54c14 elasticsearch]# ls
LICENSE.txt README.asciidoc config jdk logs plugins
NOTICE.txt bin data lib modules
We then use the following command:
./bin/elasticsearch-certutil ca
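elasticsearch-certutil ca prompts for an output file name and an optional password. If you prefer to skip the prompts, the same can be done non-interactively; here I pass an empty password so that the docker-compose.yml above works without any additional keystore password settings:

./bin/elasticsearch-certutil ca --out elastic-stack-ca.p12 --pass ""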
We accept the default filename elastic-stack-ca.p12:
ls
LICENSE.txt README.asciidoc config elastic-stack-ca.p12 lib modules
NOTICE.txt bin data jdk logs plugins
This creates a certificate file called elastic-stack-ca.p12. We then type the exit command to leave the container environment.
Then we go to the directory where the docker-compose.yml file resides:
$ pwd
/Users/liuxg/elastic/docker
liuxg:docker liuxg$ ls
docker-compose.yml
We type the following command:
docker cp vigorous_goldstine:/usr/share/elasticsearch/elastic-stack-ca.p12 .
After this copy, we can see all the files in the current directory:
$ pwd
/Users/liuxg/elastic/docker
liuxg:docker liuxg$ ls
docker-compose.yml elastic-stack-ca.p12
Now that we are finished with this container, we can stop and remove it:
$ docker stop vigorous_goldstine
$ docker rm vigorous_goldstine
Now that we’re ready, let’s go ahead and set the username and password for our Elasticsearch.
Create the security accounts
If we launch Elasticsearch and Kibana all at once, the account Kibana needs does not exist yet. So we launch Elasticsearch and Kibana in stages.
Start Elasticsearch
In our terminal, go to the directory where docker-compose.yml is located and run:
docker-compose up -d es01
The command output is as follows:
$ docker-compose up -d es01
Starting es01 ... done
The container can be checked by using the following command:
$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED         STATUS              PORTS                              NAMES
0f764e16c163   docker.elastic.co/elasticsearch/elasticsearch:7.6.2   "/usr/local/bin/dock…"   5 minutes ago   Up About a minute   0.0.0.0:9200->9200/tcp, 9300/tcp   es01
Elasticsearch is bound to the address 0.0.0.0, which means it listens on all network interfaces. It can be accessed both at localhost:9200 and at the host's private address, PrivateIP:9200. Let’s open a browser and go to localhost:9200:
As we can see, when we try to access it, it asks us for a username and password, which we haven’t set yet.
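If you prefer the command line to a browser, the same check can be done with curl. Without credentials, Elasticsearch should reject the request with a 401 security error:

curl localhost:9200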
Let’s log in to our Elasticsearch container:
docker-compose exec es01 bash
$ docker-compose exec es01 bash
[root@496d9f848178 elasticsearch]# ls
LICENSE.txt README.asciidoc config jdk logs plugins
NOTICE.txt bin data lib modules
We type the following command in this directory:
./bin/elasticsearch-setup-passwords interactive
Then we follow the prompts and set the passwords for the built-in users to 123456.
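For reference, elasticsearch-setup-passwords also has a non-interactive mode that generates random passwords and prints them once. If you use it, remember to put the generated elastic password into docker-compose.yml instead of 123456:

./bin/elasticsearch-setup-passwords auto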
When we’re done, we exit the container. Visit localhost:9200 again and log in with elastic/123456:
Now our Elasticsearch security is set.
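A quick command-line check that authentication now works (assuming the password you set was 123456):

curl -u elastic:123456 localhost:9200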
Start Kibana
In the same way, we use the following command:
$ docker-compose up -d kibana
$ docker-compose up -d kibana
es01 is up-to-date
Creating kibana ... done
liuxg:docker liuxg$ docker ps
CONTAINER ID   IMAGE                                                 COMMAND                  CREATED          STATUS          PORTS                              NAMES
17c2273346bd   docker.elastic.co/kibana/kibana:7.6.2                 "/usr/local/bin/dumb…"   18 seconds ago   Up 18 seconds   0.0.0.0:5601->5601/tcp             kibana
496d9f848178   docker.elastic.co/elasticsearch/elasticsearch:7.6.2   "/usr/local/bin/dock…"   2 hours ago      Up 2 hours      0.0.0.0:9200->9200/tcp, 9300/tcp   es01
We can monitor the Kibana run log by using the following command:
$ docker-compose logs -f kibana
$ docker-compose logs -f kibana
Attaching to kibana
kibana    | {"type":"log","@timestamp":"2020-04-23T11:19:36Z","tags":["info","plugins-service"],"pid":6,"message":"Plugin \"case\" is disabled."}
kibana    | {"type":"log","@timestamp":"2020-04-23T11:19:51Z","tags":["warning","config","deprecation"],"pid":6,"message":"Setting [elasticsearch.username] to \"elastic\" is deprecated. You should use the \"kibana\" user instead."}
kibana    | {"type":"log","@timestamp":"2020-04-23T11:19:51Z","tags":["warning","config","deprecation"],"pid":6,"message":"Setting [xpack.monitoring.elasticsearch.username] to \"elastic\" is deprecated. You should use the \"kibana\" user instead."}
kibana    | {"type":"log","@timestamp":"2020-04-23T11:19:51Z","tags":["info","plugins-system"],"pid":6,"message":"Setting up [37] plugins: [taskManager,siem,licensing,infra,encryptedSavedObjects,code,usageCollection,metrics,canvas,timelion,features,security,apm_oss,translations,reporting,uiActions,data,navigation,status_page,share,newsfeed,inspector,expressions,visualizations,embeddable,advancedUiActions,dashboard_embeddable_container,kibana_legacy,management,dev_tools,home,spaces,cloud,apm,graph,eui_utils,bfetch]"}
...
Wait a moment and type the address localhost:5601 in our browser.
On the login screen, it prompts us for a username and password. We log in with the account information elastic/123456:
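If you also want to check Kibana from the command line, its status API accepts the same credentials. A rough check:

curl -u elastic:123456 localhost:5601/api/status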
This completes our configuration. If you want to turn off the entire Elastic Stack, you can use the following command:
docker-compose down
$ docker-compose down
Stopping kibana ... done
Stopping es01 ... done
Removing kibana ... done
Removing es01 ... done
Removing network docker_esnet
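Note that docker-compose down keeps the named data volumes, so the indices and the passwords we set survive a restart. If you want to wipe everything and start over, the -v flag also removes the volumes; this is destructive, so use it with care:

docker-compose down -v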
If you want to use your cluster again, you just need to type the following command:
docker-compose up
$ docker-compose up
Creating network "docker_esnet" with the default driver
Creating es01 ... done
Creating es02 ... done
Creating kibana ... done
Attaching to es02, es01, kibana
es01    | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
es02    | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
es01    | {"type": "server", "timestamp": "2020-04-23T11:27:10,816Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "es01", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [40gb], ..."}
...
Then the entire Elastic Stack (Elasticsearch and Kibana) will start with a single command.
Accessing Elasticsearch from Beats
After deploying the Elasticsearch cluster, we may need to import data into it at a later date. We have been using the localhost:9200 address above, but that only works on the local machine. If your Beats runs on a different machine (on the same LAN), you will need to use the PrivateIP:9200 address to write to Elasticsearch. The private IP can be obtained by running the following command:
ifconfig | grep 192
In my case:
$ ifconfig | grep 192
	inet 192.168.0.100 netmask 0xffffff00 broadcast 192.168.0.255
So in my case, the Elasticsearch address to use is 192.168.0.100:9200.
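On the Beats side, that address goes into the Elasticsearch output section together with the credentials we created above. A minimal sketch for Filebeat, using the values from this tutorial; adjust them to your own setup:

# filebeat.yml on the Beats machine
output.elasticsearch:
  hosts: ["192.168.0.100:9200"]
  username: "elastic"
  password: "123456"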
Okay, that’s it for today. In today’s article, we went into detail on how to configure security for an Elasticsearch cluster created with Docker.
Going further
If you want to deploy a more complete Elastic Stack using Docker, you can see the official Elastic repository at github.com/elastic/sta… .