In today’s article, I want to show you how to install Elasticsearch on Linux and macOS. Installing Elasticsearch is pretty straightforward: you will install it directly from the compiled archive (.tar.gz). If you want an overview of Elasticsearch, please see my article “Introduction to Elasticsearch”.

This package is free to use under the Elastic License. It includes open source and free commercial features as well as paid commercial features. You can start a 30-day trial of all paid commercial features. For information about license tiers, see the Subscriptions page.

The latest stable version of Elasticsearch can be found on the Download Elasticsearch page. Other versions can be found on the Historical Versions page.

Note: Elasticsearch includes a bundled version of OpenJDK from the JDK maintainers (GPLv2 + CE). To use your own version of Java, see the JVM version requirements. If you want to learn how to install Java on Ubuntu/Linux, please see my article “How to Install Java on Ubuntu”. The Java version cannot be lower than 1.7.0_55. Starting with Elasticsearch 7.0, you no longer need to install Java yourself: the installation package contains a matching Java version.

 

Download and install the Linux archive

 

In the following installation, 7.3.0 is used as an example. In a real installation, you can replace 7.3.0 on the command line with the latest release number, such as 7.5.1. If you want to download a specific version directly from the website, you can do so at www.elastic.co/downloads/p…

You can download and install the Linux archive file of Elasticsearch V7.3.0 as follows:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-linux-x86_64.tar.gz
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-linux-x86_64.tar.gz.sha512
$ shasum -a 512 -c elasticsearch-7.3.0-linux-x86_64.tar.gz.sha512
$ tar -xzf elasticsearch-7.3.0-linux-x86_64.tar.gz
$ cd elasticsearch-7.3.0/

On the third line above, we compare the SHA of the downloaded .tar.gz archive with the published checksum, which should print elasticsearch-{version}-linux-x86_64.tar.gz: OK. This verifies that the downloaded file is intact.
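As a minimal sketch of how that verification works, here is the same check run against a locally created stand-in file (using sha512sum, the coreutils equivalent of shasum -a 512; the file names here are illustrative, not the real archive):

```shell
# Create a stand-in "archive" and its published checksum file.
echo "example payload" > archive.tar.gz
sha512sum archive.tar.gz > archive.tar.gz.sha512

# Same check as step three above: recompute the hash and compare.
result=$(sha512sum -c archive.tar.gz.sha512)
echo "$result"    # archive.tar.gz: OK
```

A tampered or truncated download would instead print FAILED and the command would exit with a non-zero status.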

The last line above changes into the extracted directory, which represents $ES_HOME. We use $ES_HOME below to refer to our installation directory.

Alternatively, you can download the following package, which contains only code under the Apache 2.0 license: artifacts.elastic.co/downloads/e…

Virtual memory

Elasticsearch uses an mmapfs directory by default to store its indices. The operating system’s default limit on mmap counts may be too low, which can result in out-of-memory exceptions.

On Linux, you can increase the limit by running the following command as root:

sysctl -w vm.max_map_count=262144
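A small sketch for checking the current value before changing anything (needs_increase is a hypothetical helper of mine, not an Elasticsearch tool; the 262144 threshold is the value from the command above):

```shell
REQUIRED=262144

# Hypothetical helper: succeeds when the given value is below the requirement.
needs_increase() {
    [ "$1" -lt "$REQUIRED" ]
}

# sysctl may be unavailable outside Linux; fall back to 0 in that case.
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
if needs_increase "$current"; then
    echo "vm.max_map_count=$current is too low; raise it to $REQUIRED"
else
    echo "vm.max_map_count=$current is sufficient for Elasticsearch"
fi
```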

To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf. To verify after a reboot, run

sysctl vm.max_map_count

RPM and Debian packages will automatically configure this setting. No further configuration is required.

You can also install it directly using DEB and RPM packages.

Note: Since Elasticsearch iterates quickly, you can download and install the latest version. The latest version can be found at: www.elastic.co/downloads/e…

 

Download and install the MacOS archive

You can download and install the MacOS archive of Elasticsearch V7.3.0 as follows:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-darwin-x86_64.tar.gz
$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-darwin-x86_64.tar.gz.sha512
$ shasum -a 512 -c elasticsearch-7.3.0-darwin-x86_64.tar.gz.sha512
$ tar -xzf elasticsearch-7.3.0-darwin-x86_64.tar.gz
$ cd elasticsearch-7.3.0/

On the third line above, we compare the SHA of the downloaded .tar.gz archive with the published checksum, which should print elasticsearch-{version}-darwin-x86_64.tar.gz: OK. This verifies that the downloaded file is intact.

The last line above changes into the extracted directory, which represents $ES_HOME. We use $ES_HOME below to refer to our installation directory.

Alternatively, you can download the following package, which contains only code under the Apache 2.0 license: artifacts.elastic.co/downloads/e…

 

Download and install the Windows.zip file

Download the.zip archive for Elasticsearch V7.3.1 from:

artifacts.elastic.co/downloads/e…

Alternatively, you can download the following package, which contains only the functionality provided under the Apache 2.0 license:

Decompress it with your favorite decompression tool. This will create a folder called elasticsearch-7.3.1, which we’ll call %ES_HOME%. In a terminal window, cd to the %ES_HOME% directory, for example:

cd C:\elasticsearch-7.3.1

On Windows, open a Command Prompt with administrator privileges to start Elasticsearch:

bin\elasticsearch.bat

 

Run Elasticsearch from the command line

You can start Elasticsearch from the command line as follows:

./bin/elasticsearch

By default, Elasticsearch runs in the foreground, prints its logs to STDOUT, and can be stopped by pressing Ctrl-C.

There are two important configuration options

  • elasticsearch.yml: for example path.data: /data/elasticsearch, which sets the directory where index data is stored
  • jvm.options: for example -Xms512m, which sets the JVM heap size
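Equivalently, these can be set in the files under $ES_HOME/config; a sketch of the relevant fragments (the /data/elasticsearch path is just an example location):

```conf
# config/elasticsearch.yml
path.data: /data/elasticsearch

# config/jvm.options
-Xms512m
-Xmx512m
```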

We can configure these two options on the command line as follows:

$ ./bin/elasticsearch -E path.data=/data/elasticsearch

Or:

$ ES_JAVA_OPTS="-Xms512m" ./bin/elasticsearch

or

$ ES_JAVA_OPTS="-Xms512m -Xmx512m" ./bin/elasticsearch

We can also override the default node name of Elasticsearch as follows:

$ ./bin/elasticsearch -E node.name=mynodename

This is very useful when starting two or more different nodes for replica deployment tests.

 

Enable automatic creation of system indexes

Some commercial features automatically create system indices in Elasticsearch. By default, Elasticsearch is configured to allow automatic index creation, and no additional steps are required. However, if automatic index creation is disabled in Elasticsearch, you must configure action.auto_create_index in elasticsearch.yml to allow the commercial features to create the following indices:

action.auto_create_index: .monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

Important note:

If you are using Logstash or Beats, you will most likely need additional index names in the action.auto_create_index setting, depending on your local configuration. If you are unsure of the correct value for your environment, consider setting the value to *, which will allow all indexes to be created automatically.

 

Docker installation

If you want to install using Docker, please refer to my article “Elastic: Deploying the Elastic Stack with Docker”. Docker Compose allows you to install multiple applications of the Elastic Stack at once.

For Docker installation, you can refer to my other article “Elasticsearch: Install Elasticsearch from scratch and use Python to load a CSV and read and write it” to deploy your Elasticsearch and Kibana.

 


Note: All scripts packaged with Elasticsearch require a version of Bash that supports arrays and assume that Bash is available at /bin/bash. Bash should therefore be available at this path, either directly or via a symbolic link.
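A quick way to confirm this requirement before starting, sketched below (the check itself is my own, not part of Elasticsearch):

```shell
# Verify /bin/bash exists and supports arrays before running the ES scripts.
if [ -x /bin/bash ] && /bin/bash -c 'a=(1 2 3); [ "${a[1]}" -eq 2 ]'; then
    echo "bash is suitable for the Elasticsearch scripts"
else
    echo "install bash or link it to /bin/bash first"
fi
```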

 

Check whether Elasticsearch is running

You can test whether your Elasticsearch node is running by sending an HTTP request to port 9200 on localhost:

GET /

Log printing to stdout can be disabled using the -q or --quiet option on the command line.

You can also use the curl command to query Elasticsearch:

curl 'http://localhost:9200/?pretty'

As you can see from the output, by default a cluster named “elasticsearch” is created.
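To pull a single field out of that response in a script, something like the following works. The JSON here is a sample of what a fresh default node typically returns, not live output, and the sed-based cluster_name helper is a quick hack of mine; use jq for anything serious:

```shell
# Sample of what GET / typically returns on a fresh node (not live output).
sample_response='{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "7.3.0" },
  "tagline" : "You Know, for Search"
}'

# Extract the cluster_name field with sed.
cluster_name() {
    echo "$1" | sed -n 's/.*"cluster_name" *: *"\([^"]*\)".*/\1/p'
}

cluster_name "$sample_response"    # elasticsearch
```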

If you have configured secure access for your cluster, you can run the curl command as follows:

curl -XGET "http://elastic:password@localhost:9200/"

Or:

curl -u elastic:password -XGET "http://localhost:9200/"

Here, elastic and password are the user name and password for accessing Elasticsearch. For details on how to configure security, read my article “Setting Up Elastic Account Security”.

For developers who are familiar with Postman, you can easily use Postman with Elasticsearch. It is a good debugging tool.

We can use the Postman tool without Kibana. If you’re interested in this, you can read my post “Elastic: Using Postman to Access Elastic Stack.”

 

Run as a daemon

To run Elasticsearch as a daemon, specify -d on the command line and record the process ID in a file using the -p option:

./bin/elasticsearch -d -p pid

The command above stores the process ID in a file called pid, making it easy to terminate the process later. Log messages can be found in the $ES_HOME/logs/ directory.

To stop Elasticsearch, terminate the process ID recorded in the pid file:

$ pkill -F pid

Or use the following command to stop it:

$ kill `cat pid`
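The start-and-stop cycle above can be sketched with an ordinary background process standing in for Elasticsearch (sleep here is just a placeholder for the daemon):

```shell
# Start a background process and record its PID,
# analogous to ./bin/elasticsearch -d -p pid.
sleep 30 &
echo $! > pid

# Stop it via the recorded PID, analogous to pkill -F pid or kill `cat pid`.
PID=$(cat pid)
kill "$PID"
wait "$PID" 2>/dev/null || true
rm -f pid
```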

You can also use the Java Virtual Machine Process Status Tool (jps) to find the Elasticsearch process:

jps | grep Elasticsearch

The process ID of the currently running Elasticsearch instance will be displayed.

To stop Elasticsearch, run the following command, replacing 6253 with the process ID shown by jps:

kill -9 6253

The startup scripts provided with RPM and Debian packages are responsible for starting and stopping the Elasticsearch process for you.

Check the log file to make sure the process has shut down. Near the end of the file you’ll see log lines such as “Native controller process has stopped”, “stopped”, “closing”, and “closed”:

$ pwd
/Users/liuxg/elastic/elasticsearch-7.3.0/logs
$ ls *.log
elasticsearch.log  elasticsearch_deprecation.log  elasticsearch_index_indexing_slowlog.log  elasticsearch_index_search_slowlog.log  gc.log

Above, we can see a file called elasticsearch.log. To view the Elasticsearch logs, run the following command from the Elasticsearch installation directory:

tail logs/elasticsearch.log

 

Configure Elasticsearch on the command line

By default, Elasticsearch loads its configuration from the $ES_HOME/config/elasticsearch.yml file. The format of this configuration file is described in Configuring Elasticsearch.

Any setting that can be specified in the configuration file can also be specified on the command line using the -E syntax, as follows:

./bin/elasticsearch -d -Ecluster.name=my_cluster -Enode.name=node_1

This is especially useful when installing multiple instances of Elasticsearch so that we can easily test replicas.

We can also set the http.host value as follows:

./bin/elasticsearch -d -Ecluster.name=my_cluster -Enode.name=node_1 -E http.host="localhost","mac"

The above command can also be expressed as follows:

./bin/elasticsearch -d -E cluster.name=my_cluster -E node.name=node_1 -E http.host="localhost","mac"

Notice the extra space between -E and the argument.

Note that both mac and localhost above must be resolvable, pingable addresses.

You can specify the address for mac in the /etc/hosts file on your own computer. Our Elasticsearch can then be accessed at both http://localhost:9200 and http://mac:9200.
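For example, an /etc/hosts entry like the following (illustrative; editing the file requires root) makes the name mac resolve locally:

```conf
# /etc/hosts
127.0.0.1   localhost
127.0.0.1   mac
```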

Tip: In general, any cluster-scoped setting (such as cluster.name) should be added to the elasticsearch.yml configuration file, while any node-specific setting (such as node.name) can be specified on the command line.

 

Installation file directory layout

The archive distribution is entirely self-contained. By default, all files and directories are contained within $ES_HOME, the directory created when unpacking the archive.

This is handy because you don’t have to create any directories to start using Elasticsearch, and uninstalling Elasticsearch is as easy as deleting the $ES_HOME directory. However, it is recommended to change the default locations of the config, data, and logs directories so that important data is not deleted when you later remove or upgrade the installation.
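For instance, a config/elasticsearch.yml fragment like the following relocates the data and log directories (the paths are illustrative examples):

```conf
# config/elasticsearch.yml
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
```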

  • home: the Elasticsearch home directory, $ES_HOME. Default location: the directory created by unpacking the archive.
  • bin: binary scripts, including elasticsearch to start a node and elasticsearch-plugin to install plugins. Default location: $ES_HOME/bin.
  • config: configuration files, including elasticsearch.yml. Default location: $ES_HOME/config. Setting: ES_PATH_CONF.
  • data: the location of the data files for each index/shard allocated on the node. Can hold multiple locations. Default location: $ES_HOME/data. Setting: path.data.
  • logs: log file location. Default location: $ES_HOME/logs. Setting: path.logs.
  • plugins: plugin file location; each plugin is contained in a subdirectory. Default location: $ES_HOME/plugins.
  • repo: shared file system repository locations. Can hold multiple locations; a file system repository can be placed in any subdirectory of any directory specified here. Not configured by default. Setting: path.repo.
  • script: location of script files. Default location: $ES_HOME/scripts. Setting: path.scripts.

 

Set up a secure Account

If we’re deploying Elasticsearch, we don’t want our deployment to be accessible to everyone; we only want users with accounts to have access, so we need to secure our Elastic deployment. This requires X-Pack-related functionality. See my other article “Elasticsearch: Setting Up Elastic Account Security” for details.

 

The next step

You have now set up a test Elasticsearch environment. Before you start serious development or go into production with Elasticsearch, you must apply a few additional settings:

  • Learn how to configure Elasticsearch.
  • Configure important Elasticsearch settings.
  • Configure important system settings.

We can install Kibana next. Kibana has a Web interface. It helps us present and analyze our data. It also makes it easy to enter your data into the Elasticsearch database via the user interface. See article:

  • How to install Kibana in the Elastic stack on Linux, MacOS and Windows
  • Start using Elasticsearch (1)
  • Elastic: How can I emulate multiple nodes on a single machine

If you want to deploy an Elastic cluster on the cloud, you can read my two articles below:

  1. Elastic: Deploy an Elastic cluster on Elastic Cloud in 3 minutes
  2. Elastic: How to build an Elastic cluster on Alibaba Cloud

Elastic Stack Introduction and Installation

 
