README

  • Notes on search-engine DSL that I found I could not use fluently
  • Problems encountered at work are also recorded here

Elasticsearch directory

  • bin -> command scripts, such as the script that starts ES
  • config -> ES configuration files, read when ES starts
  • elasticsearch.yml -> cluster information, external port, memory lock, data directory, and cross-origin access settings
  • jvm.options -> JVM settings for ES, such as heap size
  • log4j2.properties -> logging configuration; ES uses Log4j2 as its logging framework
  • data -> directory for index data
  • lib -> libraries ES depends on
  • logs -> directory for log files
  • modules -> ES built-in modules
  • plugins -> directory for ES extension plugins; for example, the ik Chinese analyzer can be dropped in here and ES loads it automatically at startup
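
The elasticsearch.yml settings mentioned above look roughly like this (a sketch with illustrative values — adjust cluster name, paths, and CORS policy to your environment):

```yaml
# elasticsearch.yml — sample of the settings listed above (values are illustrative)
cluster.name: my-cluster          # cluster information
http.port: 9200                   # external HTTP port
bootstrap.memory_lock: true       # memory lock
path.data: /var/lib/elasticsearch # data directory
http.cors.enabled: true           # cross-origin access
http.cors.allow-origin: "*"
```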

Build ElasticSearch and Kibana with Docker

  1. docker run --name kibana -p 5601:5601 -d -e ELASTICSEARCH_URL=http://localhost:9200 kibana:6.7.1
  2. docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 -p 5601:5601 elasticsearch:6.7.1
  3. docker run -d -e ELASTICSEARCH_URL=http://127.0.0.1:9200 --name kibana --network=container:elasticsearch kibana:6.7.1
  4. ps -ef | grep '.node/bin/node.src/cli'
  5. kill -9 PID
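
Once the containers are up, GET /_cluster/health confirms the cluster is reachable. A minimal sketch of reading its response; the JSON below is a typical sample shape, not captured from a live cluster:

```python
import json

# A typical response shape from GET http://localhost:9200/_cluster/health
# (sample values, not from a real cluster)
sample = '{"cluster_name": "elasticsearch", "status": "yellow", "number_of_nodes": 1}'

health = json.loads(sample)
# "green": all shards allocated; "yellow": some replicas unassigned
# (normal for a single-node setup); "red": some primary shards unassigned
print(health["cluster_name"], health["status"])
```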

Kibana use

  • Discover -> search and filter your data
  • Visualize -> review all the visualization types available in Kibana
  • Management -> configure Kibana and manage saved objects
  • Console -> submit REST requests to Elasticsearch from the interactive console

ElasticSearch core concepts

  1. Cluster: consists of one or more nodes; the default cluster name is "elasticsearch".
  2. Node: a single instance of ES; a node can take on one or more of the roles below.
  3. Master node: stores cluster metadata.
  4. Data node: stores data.
  5. Ingest node: can run a configured pipeline of processors to ETL data before it actually enters the index.
  6. Coordinating node: when it receives a search request, it forwards the request to the data nodes; each data node executes the request locally and returns its results to the coordinating node, which merges them and returns the final result to the client. Every node is a coordinating node by default; when node.master, node.data, and node.ingest are all set to false, the node acts purely as a coordinating node.

Note: a tribe node is a special type of coordinating node that connects to multiple clusters and performs searches and other operations across all connected clusters.
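
For reference, a coordinating-only node is configured by disabling the other roles in elasticsearch.yml, using the 6.x setting names mentioned above (a config sketch):

```yaml
# elasticsearch.yml — coordinating-only node (ES 6.x role settings)
node.master: false
node.data: false
node.ingest: false
```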

  1. An index can be understood as a database in a relational database system.
  2. A type is like a table, such as a user table or a recharge table.
  3. Note:
  4. In ES 5.x, an index can have multiple types.
  5. In ES 6.x, an index can have only one type.
  6. From ES 7.x on, the concept of type is removed.
  7. Mapping defines the type of each field and the analyzer a field uses; it is equivalent to a table schema in a relational database.
  8. A document is equivalent to a row in a relational database.
  9. A field in a document is equivalent to a column in a relational database.
  10. Shard and replica
  11. An index's data is physically distributed across multiple primary shards, each storing only part of the data. Each primary shard can have multiple copies, called replica shards; every shard is therefore either a primary shard or a replica shard.
  12. Note:
  13. A document lives on exactly one primary shard (and its replica shards); it is never spread across multiple primary shards.
  14. By default an index has five primary shards, each with one replica, so the index has ten shards in total and needs at least two nodes for the cluster to be healthy, because a primary shard and its replica cannot live on the same node.
  15. The number of primary shards cannot be changed after the index is created; the number of replica shards can.
  16. Each shard is a complete Lucene instance with full indexing and request-handling capabilities.
  17. Sharding lets ES scale horizontally. Replicas mainly provide fault tolerance, acting as HA for the primary shard; they also improve search performance, because searches can run in parallel across all replicas.
  18. A shard contains multiple segments, and each segment is an inverted index.
  19. On a query, each shard merges the results from all of its segments and returns that as the shard's result.
  20. How segments are generated:
  21. When ES receives a write, the data is written to an in-memory buffer and, to guard against losing what is in the buffer, also appended to the translog file.
  22. Every second, data in the buffer is written to a new segment file, which lands in the OS cache first.
  23. Once a segment file is in the OS cache it is opened for search, and the in-memory buffer is cleared.
  24. Over time the number of segment files in the OS cache grows and the translog keeps growing; when the translog gets large enough, a flush is triggered.
  25. A flush persists the segment files in the OS cache to disk and truncates the translog.
  26. When the number of segments grows past a point, ES merges many small segments into larger ones and deletes the small ones to improve query performance.
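
The routing rule behind point 15 can be sketched in a few lines: a document goes to shard hash(routing) % number_of_primary_shards. ES uses murmur3 on the document _id by default; crc32 stands in here purely to illustrate the formula, not to reproduce ES's actual placement:

```python
import zlib

def route(doc_id: str, num_primary_shards: int) -> int:
    # ES: shard = murmur3(_routing) % number_of_primary_shards
    # (crc32 is a stand-in hash for illustration only)
    return zlib.crc32(doc_id.encode()) % num_primary_shards

# With 5 primary shards (the pre-7.x default), every id maps to a stable
# shard number in [0, 5). Changing the primary count changes the result
# for existing ids — which is why it is fixed at index creation time.
print(route("user-42", 5))
```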

Problems encountered

When creating an index that Kibana will use, be careful choosing the time field: do not map it as type long, because ES will then not recognize it as a date.
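
epoch_millis in a date format string means the field also accepts raw millisecond timestamps. Converting one by hand (pure Python, no ES involved) shows the relationship between the two formats in the mapping below:

```python
from datetime import datetime, timezone

# A millisecond timestamp like the ones epoch_millis accepts.
# Without a date mapping, ES would see only an opaque long.
ts_millis = 1559318400000
dt = datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2019-05-31 16:00:00 (UTC)
```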

  1. The first way
PUT localhost:9200/date_test            // date_test is the index name
{
  "mappings": {
    "test": {                           // this is the type
      "properties": {
        "requestTime": {                // the time field
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"   // accepted formats
        },
        "timestamp": {
          "type": "date",
          "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
        }
      }
    }
  }
}

Check the result with GET localhost:9200/date_test/test/_mapping
  2. Create an index from a template

//todo

  3. Delete documents based on conditions
POST http://172.25.35.230:9200/mobile-requestpagelog/_delete_by_query
body:
{
  "query": {
    "bool": {
      "must": [
        { "range": { "timestamp": { "lte": 1559318400000 } } },
        { "term": { "eventId": { "value": "h5_request" } } }
      ]
    }
  }
}

result:

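
The _delete_by_query body above can be built and round-tripped locally before sending, which catches brace and comma mistakes early (a sketch; the field names timestamp and eventId come from the example above):

```python
import json

# Rebuild the request body as a Python structure, then serialize it.
query = {
    "query": {
        "bool": {
            "must": [
                {"range": {"timestamp": {"lte": 1559318400000}}},
                {"term": {"eventId": {"value": "h5_request"}}},
            ]
        }
    }
}
body = json.dumps(query)
# Round-trip to confirm the JSON is well formed before POSTing it.
print(len(json.loads(body)["query"]["bool"]["must"]))  # 2
```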