While you can use Elasticsearch as a document store and retrieve documents and their metadata, its real power lies in easy access to the full set of search capabilities built on top of the Apache Lucene search engine library.
Elasticsearch provides a simple, clear REST API for managing your cluster as well as indexing and searching your data. For testing purposes, you can easily submit requests directly from the command line or through the Developer Console in Kibana. In your application, you can use the Elasticsearch client for your preferred language: Java, JavaScript, Go, .NET, PHP, Perl, Python, or Ruby.
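As a minimal sketch of talking to a cluster from application code, assuming the official Python client (elasticsearch-py 8.x) and a locally running node; the "employee" index and its document fields are made up for this example:

```python
from elasticsearch import Elasticsearch

# Connect to a cluster (replace the URL, or pass cloud_id/api_key for Elastic Cloud).
es = Elasticsearch("http://localhost:9200")

# Index a document; this is equivalent to the REST call PUT /employee/_doc/1.
es.index(
    index="employee",
    id="1",
    document={"name": "Jane Doe", "gender": "F", "age": 34, "hire_date": "2021-03-15"},
)
```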
Search for your data
The Elasticsearch REST API supports structured queries, full-text queries, and complex queries that combine both. Structured queries are similar to the queries you can construct in SQL. For example, you could search the gender and age fields in the employee index and sort the matches by the hire_date field. A full-text query finds all documents that match the query string and returns them sorted by relevance, that is, by how well they match your search terms.
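A rough sketch of both styles, using the hypothetical employee index and Python client from above (a keyword-mapped gender field and a text about field are assumptions of this example):

```python
# Structured query: exact filters on gender and age, sorted by hire_date.
resp = es.search(
    index="employee",
    query={
        "bool": {
            "filter": [
                {"term": {"gender": "F"}},
                {"range": {"age": {"gte": 30, "lte": 40}}},
            ]
        }
    },
    sort=[{"hire_date": {"order": "desc"}}],
)

# Full-text query: hits come back scored and sorted by relevance.
resp = es.search(index="employee", query={"match": {"about": "search and analytics"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"])
```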
In addition to searching for individual terms, you can perform phrase searches, similarity searches, and prefix searches, and get autocomplete suggestions.
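A few of these query types, sketched with hypothetical fields (a completion-mapped name.suggest sub-field is assumed for the autocomplete example):

```python
# Phrase search: the terms must appear together and in order.
es.search(index="employee", query={"match_phrase": {"about": "distributed search engine"}})

# Prefix search.
es.search(index="employee", query={"prefix": {"last_name": {"value": "smi"}}})

# Similarity (fuzzy) search: tolerates small typos.
es.search(index="employee", query={"fuzzy": {"last_name": {"value": "smiht", "fuzziness": "AUTO"}}})

# Autocomplete via the completion suggester.
es.search(
    index="employee",
    suggest={"name-suggest": {"prefix": "jan", "completion": {"field": "name.suggest"}}},
)
```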
Do you want to search geospatial or other numeric data? Elasticsearch indexes non-text data in optimized data structures that support high-performance geographic and numeric queries.
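For example (the geo_point field office.location and the numeric salary field are assumptions):

```python
# Geo query: documents within 20 km of a point.
es.search(
    index="employee",
    query={
        "geo_distance": {
            "distance": "20km",
            "office.location": {"lat": 52.52, "lon": 13.40},
        }
    },
)

# Numeric range query backed by the same optimized structures.
es.search(index="employee", query={"range": {"salary": {"gte": 50000, "lt": 80000}}})
```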
You can use all of these search capabilities through Elasticsearch’s comprehensive JSON-style Query DSL. You can also construct SQL-style queries to search and aggregate data natively inside Elasticsearch, and JDBC and ODBC drivers enable a wide range of third-party applications to interact with Elasticsearch through SQL.
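As a sketch, the Python client’s sql.query helper wraps the _sql REST endpoint, so an SQL-style aggregation over the hypothetical employee index might look like this:

```python
resp = es.sql.query(
    query="""
        SELECT gender, AVG(age) AS avg_age
        FROM employee
        GROUP BY gender
        ORDER BY avg_age DESC
    """
)
print(resp["columns"])
print(resp["rows"])
```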
Analyze your data
Elasticsearch aggregations enable you to build complex summaries of your data and drill down into key metrics, patterns, and trends. With aggregations, you can not only find the proverbial “needle in a haystack”, but also answer questions like the following (sketched in code after the list):
- How many needles are in the haystack?
- What is the average length of the needles?
- What is the median length of the needles, broken down by manufacturer?
- How many needles have been added to the haystack in each of the last six months?
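Here is a rough sketch of how those questions map onto aggregations, assuming a hypothetical needles index with a numeric length field, a keyword manufacturer field, and a date added field:

```python
resp = es.search(
    index="needles",
    size=0,  # only the aggregation results are needed, not individual hits
    aggs={
        "needle_count": {"value_count": {"field": "length"}},
        "avg_length": {"avg": {"field": "length"}},
        "median_length_by_manufacturer": {
            "terms": {"field": "manufacturer"},
            "aggs": {"median_length": {"percentiles": {"field": "length", "percents": [50]}}},
        },
        "added_per_month": {
            "date_histogram": {"field": "added", "calendar_interval": "month"}
        },
    },
)
print(resp["aggregations"]["avg_length"]["value"])
```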
You can also use aggregations to answer more nuanced questions, such as the ones below (see the sketch that follows the list):
- What are your most popular needle manufacturers?
- Are there any unusual or anomalous clumps of needles?
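The first of these is a plain terms aggregation over the same hypothetical index:

```python
resp = es.search(
    index="needles",
    size=0,
    aggs={"top_manufacturers": {"terms": {"field": "manufacturer", "size": 5}}},
)
for bucket in resp["aggregations"]["top_manufacturers"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```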
Because aggregations use the same data structures used for search, they are also very fast. This lets you analyze and visualize your data in real time. Your reports and dashboards update as your data changes, so you can take action based on the latest information.
Aggregations also run alongside search requests: you can search documents, filter results, and perform analytics on the same data in a single request. And because aggregations are calculated in the context of a particular search, you are not just displaying a count of all size-70 needles; you are displaying a count of the size-70 needles that match the user’s search criteria, for example, all size-70 stainless steel needles.
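A sketch of such a combined request, again with hypothetical fields; the count in the response reflects only the documents matching the search:

```python
resp = es.search(
    index="needles",
    query={
        "bool": {
            "must": [{"match": {"description": "stainless steel"}}],
            "filter": [{"term": {"size": 70}}],
        }
    },
    aggs={"matching_size_70": {"value_count": {"field": "size"}}},
)
```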
But wait, there’s more
Do you want to analyze your time series data automatically? You can use machine learning features to create accurate baselines of normal behavior in your data and identify anomalous patterns. With machine learning, you can detect the following (a minimal job sketch appears after the list):
- Anomalies related to temporal deviations in values, counts, or frequencies
- Statistical rarity
- Unusual behavior for a member of a population
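A minimal sketch of an anomaly detection job via the Python client; ml.put_job wraps the PUT _ml/anomaly_detectors/&lt;job_id&gt; endpoint, machine learning requires an appropriate subscription, and the job id, bucket span, and time field here are illustrative only:

```python
es.ml.put_job(
    job_id="request-rate",
    analysis_config={
        "bucket_span": "15m",
        # Model the event count per bucket and flag unusually high or low values.
        "detectors": [{"function": "count"}],
    },
    data_description={"time_field": "@timestamp"},
)
```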
And the best part? You do not need to specify algorithms, models, or other data science-related configurations to do this.
See the website: www.elastic.co/guide/en/el…
Translator’s note: corrections to this translation are welcome. Translating is not easy, so please do not plagiarize; if you reuse this article, please credit the source.