The Elastic Cloud Elasticsearch Service lets you quickly start Elasticsearch and Kibana clusters and begin sending logs immediately. First, you need to log in to the Elastic Cloud Elasticsearch Service. Open the home page, then click the “No account? Register now” link. Finally, enter your email in the “Enterprise Email Address” field and click “Start Free Trial”:

 

After entering your email address and clicking “Start Free Trial,” you should find an email with a verification link in your inbox.

After choosing (and saving!) a password, you will be directed to the login page:

For this experiment, we will create the cluster on AWS, but you can choose a different cloud provider for your deployment. Select the region closest to your current location. Don’t forget to give the deployment a meaningful name (in the field at the top); for this experiment, it will be called logs_dev.

Since Tokyo is close to us, we choose Asia Pacific (Tokyo) as our region.

From the choices above, we select the latest version, 7.4.2; of course, other versions can be chosen as well. We also keep I/O Optimized, the default architecture, since the deployment type is not important for our current requirements. Read the detailed description of each option to decide which one suits you. Then go ahead and click “Create Deployment” at the bottom of the page.

So here we are:

 

As you can see above, it takes about three minutes to create our cluster on Elastic Cloud. The generated username and password are displayed; please write them down. This username and password will be used when Beats upload data and when logging in to Kibana.

In the lower part of the page, we can see that this is a cluster of five servers:

It shows two data nodes, a master node, a Kibana server, and an APM server.

Above we can see that we have created an Elastic cluster on AWS in Tokyo. You can copy the Elasticsearch, Kibana, and APM URLs. We can use the Copy Endpoint URL under Kibana to open Kibana in the browser, or directly click the Launch link under Kibana. Remember to log in using the saved username and password:

The screen after entering is as follows:

If you forget your username and password, you can click Activity to view it:

You can also reset your password by clicking Security:

The Cloud ID shown above, together with the username and password, lets us import Beats or Logstash data into our cluster. For example, in Filebeat’s filebeat.yml:
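The original shows this configuration as a screenshot; a minimal sketch of what that part of filebeat.yml might look like is below. The Cloud ID string and the password are placeholders, not real values — copy your own from the deployment page:

```yaml
# filebeat.yml -- connect Filebeat to an Elastic Cloud deployment.
# Both values below are placeholders; use the ones from your own Cloud console.
cloud.id: "logs_dev:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyQ..."   # Cloud ID copied from the deployment page
cloud.auth: "elastic:YOUR_PASSWORD"                        # username:password shown at creation time
```

With cloud.id and cloud.auth set, you do not need to configure output.elasticsearch.hosts manually.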

In addition to writing your username and password in filebeat.yml, you can also use the following method.

1) Enter:

./filebeat keystore create

2) Then type the following command:

./filebeat keystore add CLOUD_PWD

At the prompt in this step, paste the password that was generated in Elastic Cloud.

3) Modify our filebeat.yml file:

This way, we don’t have to put the actual password in our filebeat.yml file; we use ${CLOUD_PWD} instead.
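The modified file appears as a screenshot in the original; a hedged sketch of the relevant lines, assuming the elastic user and the keystore key named CLOUD_PWD created above (the Cloud ID is again a placeholder):

```yaml
# filebeat.yml -- the password is resolved from the Filebeat keystore at runtime.
cloud.id: "logs_dev:dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyQ..."   # placeholder Cloud ID
cloud.auth: "elastic:${CLOUD_PWD}"                        # ${CLOUD_PWD} is read from the keystore
```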

4) Start Filebeat:

./filebeat -e

Now everything is configured, and we can see the collected data in the cloud.

 
