A few words up front
Note: this article is based on Elasticsearch 7.12.
This matters: decide which version you are using, and prefer the latest one.
Elasticsearch memory is divided into two parts: system memory, also called off-heap memory, and JVM memory, also called heap memory.
System memory is mainly used by the underlying Lucene; JVM memory is used by the various caches involved in ES queries.
This article focuses on heap memory. What exactly is ES doing with my heap during a query?!
1. Mapping concepts between ES and SQL
Official reference: Mapping concepts across SQL and Elasticsearch
| SQL | Elasticsearch | Analogy |
|---|---|---|
| column | field | column / attribute |
| row | document | row / document |
| table | index | table / index |
| schema | implicit | namespace (no direct ES equivalent) |
| catalog or database | cluster instance | database / running ES instance |
| cluster | cluster (federated) | multiple databases / multiple ES clusters |
As you can see, ES and SQL concepts are not exactly one-to-one and their semantics differ, but they have a lot in common. Thanks to SQL's clean design, many concepts carry over directly and transparently to ES.
2. ES heap memory (the JVM memory areas)
1. Node Query Cache (10% of heap by default)
Node Query Cache: official documentation
It might better be called the filter query cache.
Term queries and queries used outside of a filter context are not eligible for caching.
Each node has a filter query cache. It uses an LRU eviction policy: when the cache is full, the least recently used query results are evicted to make room for new data. You cannot inspect the contents of the query cache.
By default the cache holds a maximum of 10,000 queries, up to 10% of total heap space.
Caching is done per segment, and a segment is only eligible if it holds at least 10,000 documents and at least 3% of the shard's total documents. Because the cache is keyed by segment, merging segments invalidates cached queries.
This setting is static and must be set on each node. indices.queries.cache.size defaults to 10%:

```yaml
indices.queries.cache.size: 10%    # as a percentage of the heap
indices.queries.cache.size: 512mb  # or as a fixed size
```

⚠️ Heap used: 10%, free: 90%… and counting
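The two rules above (LRU eviction plus the per-segment eligibility check) can be sketched in a few lines of Python. This is a toy model to illustrate the behavior, not ES's actual implementation; the class name and structure are mine:

```python
from collections import OrderedDict

class NodeQueryCacheSketch:
    """Toy model of the node query cache: LRU eviction plus the
    per-segment eligibility rule (>= 10,000 docs and >= 3% of the shard)."""

    def __init__(self, max_entries=10_000):
        self.max_entries = max_entries
        self._cache = OrderedDict()  # (segment_id, query) -> cached doc ids

    @staticmethod
    def segment_is_cacheable(segment_docs, shard_docs):
        # a segment is only worth caching if it is large enough
        return segment_docs >= 10_000 and segment_docs >= 0.03 * shard_docs

    def put(self, segment_id, query, doc_ids):
        key = (segment_id, query)
        self._cache[key] = doc_ids
        self._cache.move_to_end(key)
        if len(self._cache) > self.max_entries:
            self._cache.popitem(last=False)  # evict the least recently used

    def get(self, segment_id, query):
        key = (segment_id, query)
        if key not in self._cache:
            return None
        self._cache.move_to_end(key)  # mark as recently used
        return self._cache[key]
```

Note how the cache key includes the segment id: that is why a segment merge (which produces new segments) throws away the cached entries.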
2. Indexing Buffer (10% of heap by default)
Indexing Buffer: official documentation
The indexing buffer stores the most recently indexed documents; when it fills up, the buffered documents are flushed to a segment on disk.
This setting is static and must be configured on each node in the cluster. indices.memory.index_buffer_size controls the buffer size:

```yaml
indices.memory.index_buffer_size: 10%    # as a percentage of the heap
indices.memory.index_buffer_size: 512mb  # or as a fixed size
```

If indices.memory.index_buffer_size is specified as a percentage, two companion settings take effect: indices.memory.min_index_buffer_size sets an absolute floor for the buffer (48mb by default), and indices.memory.max_index_buffer_size sets an absolute ceiling (unbounded by default).
⚠️ Heap used: 20%, free: 80%… and counting
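The interaction between the percentage setting and the min/max clamps can be sketched as a small helper. This is my own illustration of the documented behavior (default 10% of heap, 48mb floor, no ceiling), not ES code:

```python
def effective_index_buffer(heap_bytes, percent=0.10,
                           min_bytes=48 * 2**20, max_bytes=None):
    """Sketch of how a percentage-based indexing buffer is clamped
    by the min/max settings."""
    size = heap_bytes * percent
    size = max(size, min_bytes)          # indices.memory.min_index_buffer_size
    if max_bytes is not None:
        size = min(size, max_bytes)      # indices.memory.max_index_buffer_size
    return int(size)
```

For example, on a tiny 200mb heap, 10% would only be 20mb, so the 48mb floor wins.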
3. Shard Request Cache (1% of heap by default)
Shard Request Cache: official documentation
Aggregations query cache might be a more fitting name.
The shard request cache caches request results at the shard level.
By default this cache only stores requests with size=0, so it does not cache hits, but it does cache hits.total, aggregations, and suggestions.
The cache is smart: it promises the same near-real-time semantics as uncached search. In plain words, cached results stay consistent with what an uncached query would return.
Cached results are invalidated automatically whenever the shard refreshes and its documents have changed, or when its mapping is updated.
When the cache is full, the least recently used cache keys are evicted.
The cache is managed at the node level and its default maximum size is 1% of the heap, which can be changed in config/elasticsearch.yml:

```yaml
indices.requests.cache.size: 2%   # raise the request cache cap to 2% of the heap
```
Monitoring the cache:
Index statistics:

```
GET /_stats/request_cache?human
```

Node statistics:

```
GET /_nodes/stats/indices/request_cache?human
```
⚠️ Heap used: 21%, free: 79%… and counting
4. Field Data Cache (0% to unlimited heap by default)
Field Data Cache: official documentation
Doc values and fielddata
This cache is effectively a forward-index cache. Recent versions use doc_values by default; if you disable doc_values or explicitly enable fielddata on a field, the forward index for the queried field is loaded into heap memory.
Building a fielddata entry is an expensive operation, so by default entries stay resident in memory. The default cache size is unlimited, and the fielddata cache will keep growing until a circuit breaker trips.
The cache size can be set in the configuration file:

```yaml
indices.fielddata.cache.size: 38%   # as a percentage of the node's heap
indices.fielddata.cache.size: 12gb  # or as an absolute size
```

If set, it should be smaller than the fielddata circuit breaker limit.
⚠️ Circuit breakers are described later in this article.
⚠️ Heap used: 21%, free: 79%… and counting
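Why is fielddata so expensive? Conceptually it "uninverts" the inverted index: aggregations and sorting need a per-document view (doc id to terms), not a per-term view. A minimal sketch of that idea, in plain Python (not how Lucene actually stores it; doc_values precompute this view on disk instead):

```python
def uninvert(inverted_index):
    """Turn an inverted index (term -> doc ids) into a forward index
    (doc id -> sorted terms), the structure fielddata loads into heap."""
    forward = {}
    for term, doc_ids in inverted_index.items():
        for doc_id in doc_ids:
            forward.setdefault(doc_id, []).append(term)
    for terms in forward.values():
        terms.sort()  # keep per-document terms in a deterministic order
    return forward
```

Doing this for every document of a large segment, entirely in heap, is exactly the workload the fielddata circuit breaker exists to stop.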
5. Indexing Pressure (10% of heap by default)
Indexing Pressure: official documentation
Indexing documents into Elasticsearch consumes memory and CPU.
An external indexing operation passes through three stages: coordinating, primary, and replica. [See the basic write model.]
Memory limit: defaults to 10% of the heap:

```yaml
indexing_pressure.memory.limit: 10%
```
⚠️ Heap used: 31%, free: 69%… and counting
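The gist of indexing pressure is bookkeeping: outstanding bytes in the three stages are summed, and a new operation is rejected once the total would exceed the configured fraction of the heap. A toy model of mine (the real accounting is more nuanced, e.g. replica operations get extra headroom, so treat this as conceptual only):

```python
class IndexingPressureSketch:
    """Toy model of indexing_pressure.memory.limit: sum the bytes
    outstanding across the coordinating/primary/replica stages and
    reject new work past the limit (ES answers with HTTP 429)."""

    def __init__(self, heap_bytes, limit_fraction=0.10):
        self.limit = heap_bytes * limit_fraction
        self.outstanding = {"coordinating": 0, "primary": 0, "replica": 0}

    def try_start(self, stage, request_bytes):
        if sum(self.outstanding.values()) + request_bytes > self.limit:
            return False  # rejected: too much indexing pressure
        self.outstanding[stage] += request_bytes
        return True

    def finish(self, stage, request_bytes):
        self.outstanding[stage] -= request_bytes
```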
6. Segment Memory (unlimited heap)
Segment memory is the in-heap cache of Lucene segment metadata that ES uses to speed up inverted-index lookups.
Here is the most widely quoted explanation of it online:

> Isn't a segment a file? What is segment memory? A segment is a complete Lucene inverted index, which answers queries quickly via the mapping from the term dictionary to the postings lists. Because the term dictionary can be very large, loading it entirely into the heap is impractical, so Lucene builds a term index over the dictionary. Since Lucene 4.0 the data structure behind this index has been the Finite State Transducer (FST). An FST takes up very little space, and Lucene loads it fully into memory when it opens an index, which speeds up dictionary lookups and reduces random disk access.
>
> So an ES data node does not only consume disk space: to speed up access, every segment keeps some index data resident in the heap. The more segments a node has, the more heap they occupy, and this heap cannot be reclaimed by GC! This is important to understand when monitoring and managing cluster capacity. When a node's segment memory grows too large, you may need to consider deleting or archiving data, or scaling out.
Use the CAT API to inspect per-node segment memory usage:

```
GET _cat/nodes?v&h=name,port,sm
```
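The space/speed trade-off behind the term index can be illustrated without an FST: keep only every Nth term of the sorted dictionary in memory, binary-search that small sample, and read just one small "disk" block. This is my own simplification to show why some segment data must live in heap; Lucene's real structure is an FST, which compresses far better:

```python
import bisect

class TermIndexSketch:
    """Sampled in-memory index over a sorted on-disk term dictionary:
    a stand-in for Lucene's FST-based term index."""

    def __init__(self, sorted_terms, block_size=128):
        self.block_size = block_size
        self.on_disk = sorted_terms          # stands in for the on-disk dictionary
        self.in_memory = sorted_terms[::block_size]  # resident in heap

    def lookup(self, term):
        # find the block whose first term is <= the probe term
        block = bisect.bisect_right(self.in_memory, term) - 1
        if block < 0:
            return None
        start = block * self.block_size
        chunk = self.on_disk[start:start + self.block_size]  # one disk read
        return term if term in chunk else None
```

The in-memory part here is only 1/128th of the dictionary, yet every lookup touches exactly one block: that is the deal Lucene makes, and why more segments means more resident heap.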
7. Query aggregations (unlimited heap)
The biggest headache is the memory footprint of query-time aggregations. This usage is hard to pin down: it depends heavily on the query statement itself and on your cluster and data layout.
It is important to consider a query's impact before you run it, so you can adjust your cluster accordingly.
This comes down to index optimization and query optimization. The overall principle: be as lean as possible and do only what is necessary.
*(figure: bitset)*
Those optimizations are not covered in this article; I may write a separate post about them later.
*(figure: an optimized aggregation query)*
3. What keeps ES from OOMing? Circuit breakers!
A lifesaver!! The official safety net: circuit breaker settings
Elasticsearch ships with multiple circuit breakers that prevent an operation from causing an OutOfMemoryError. Each breaker limits how much memory an operation may use, and on top of them a parent breaker caps the total memory usable across all child breakers.
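The mechanics shared by the memory-based breakers fit in a few lines: scale each reservation by an overhead constant, keep a running total, and abort the operation (instead of letting the JVM OOM) when the limit would be exceeded. A toy Python model; the class and its shape are mine, not ES code:

```python
class CircuitBreakerError(Exception):
    pass

class CircuitBreakerSketch:
    """Minimal sketch of an ES memory circuit breaker: estimates are
    multiplied by an overhead factor and checked against a limit."""

    def __init__(self, limit_bytes, overhead=1.0):
        self.limit = limit_bytes
        self.overhead = overhead
        self.used = 0.0

    def add_estimate(self, nbytes):
        estimate = nbytes * self.overhead
        if self.used + estimate > self.limit:
            # ES raises a circuit_breaking_exception at this point
            raise CircuitBreakerError(
                f"would use {self.used + estimate:.0f} bytes, limit {self.limit}")
        self.used += estimate

    def release(self, nbytes):
        self.used -= nbytes * self.overhead
```

The per-breaker settings below are just the two knobs of this model: a limit and an overhead multiplier.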
1. Parent circuit breaker
Whether the parent breaker should account for actual memory usage (true) or only the amounts reserved by the child breakers (false). Defaults to true.

```yaml
indices.breaker.total.use_real_memory
```

The overall limit for the parent breaker. Defaults to 95% of the heap when indices.breaker.total.use_real_memory is true, and 70% of the heap when it is false.

```yaml
indices.breaker.total.limit
```
2. Fielddata circuit breaker
The fielddata circuit breaker estimates the heap memory required to load a field into the fielddata cache. If loading the field would push the cache past the configured limit, the breaker aborts the operation and returns an error.
The fielddata breaker limit, which defaults to 40% of the JVM heap.

```yaml
indices.breaker.fielddata.limit
```

A constant that all fielddata estimates are multiplied by to determine the final estimate. Defaults to 1.03.

```yaml
indices.breaker.fielddata.overhead
```
3. Request circuit breaker
The request circuit breaker prevents per-request data structures (for example, the memory used to compute aggregations during a request) from exceeding a certain amount of memory.
The request breaker limit, which defaults to 60% of the JVM heap.

```yaml
indices.breaker.request.limit
```

A constant that all request estimates are multiplied by to determine the final estimate. Defaults to 1.

```yaml
indices.breaker.request.overhead
```
4. In-flight requests circuit breaker
An in-flight request is one that the client has sent but has not yet received a response for.
The in-flight requests circuit breaker lets Elasticsearch limit the memory used by all currently active transport and HTTP requests on a node, so that they cannot exceed a given amount of the node's memory. Usage depends on the length of the request content itself. The breaker also assumes that memory is needed not only for the raw request but also for it as a structured object, which is reflected in the default overhead.
The in-flight requests breaker limit. Defaults to 100% of the JVM heap, which means it is effectively bounded by the parent circuit breaker's limit.

```yaml
network.breaker.inflight_requests.limit
```

A constant that all in-flight request estimates are multiplied by to determine the final estimate. Defaults to 2.

```yaml
network.breaker.inflight_requests.overhead
```
5. Accounting requests circuit breaker
The accounting circuit breaker lets Elasticsearch limit memory that is not released when a request completes. This includes Lucene segment memory.
Defaults to 100% of the JVM heap, which means it is effectively bounded by the parent circuit breaker's limit.

```yaml
indices.breaker.accounting.limit
```

The estimate multiplier, which defaults to 1.

```yaml
indices.breaker.accounting.overhead
```
6. Script compilation circuit breaker
The script compilation circuit breaker differs slightly from the memory-based breakers above: it limits the number of inline script compilations within a period of time.
The limit on the number of unique dynamic scripts that may be compiled per context within an interval. Defaults to 75/5m, meaning 75 compilations every 5 minutes.

```yaml
script.context.$CONTEXT.max_compilations_rate
```
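A 75-per-5-minutes limit like this can be modelled as a simple fixed-window counter. This is my own sketch of the behavior (ES's actual limiter is a smoothed rate, but the effect on callers is the same: past the budget, compilation requests fail):

```python
class CompilationRateLimiterSketch:
    """Fixed-window sketch of script.context.$CONTEXT.max_compilations_rate:
    at most `max_compilations` compilations per `window_seconds`."""

    def __init__(self, max_compilations=75, window_seconds=300):
        self.max_compilations = max_compilations
        self.window = window_seconds
        self.window_start = None
        self.count = 0

    def allow(self, now):
        if self.window_start is None or now - self.window_start >= self.window:
            self.window_start = now   # start a fresh window
            self.count = 0
        if self.count >= self.max_compilations:
            return False  # ES raises circuit_breaking_exception here
        self.count += 1
        return True
```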
4. Configuration suggestions
Finally, configuration suggestions. This is well-trodden ground and such articles are everywhere online, so I will only highlight the key points; there is too much to cover in full.
The official "important configuration settings" page is well worth reading.
Heap memory: sizing and swap
Anyone who has tuned G1 or CMS will have some feel for JVM tuning.
- Heap size should not exceed 32gb. In practice, 31gb is the safe choice: it keeps you clear of the threshold at which the JVM turns off compressed object pointers, so set 31gb if in doubt (compressed oops are the important detail here).
- For heaps above 8gb, G1 is the usual recommendation. (Older ES releases defaulted to CMS; newer bundled JDKs use G1.)
- Do not give the JVM more than 50% of the machine's physical memory. Why? Because Lucene needs the other half for the OS filesystem cache (Lucene: don't I count?).
- Cluster layout and sharding are a topic of their own; there is a lot to say, and I will open that can of worms in a later post.
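The first and third rules above combine into one tiny formula, sketched here for illustration:

```python
def recommended_heap_gb(physical_ram_gb, compressed_oops_cap_gb=31):
    """Heap sizing rule of thumb: at most half of physical RAM
    (the rest goes to Lucene via the OS filesystem cache), and
    never past the compressed-oops threshold (31gb is safe)."""
    return min(physical_ram_gb / 2, compressed_oops_cap_gb)
```

For example, a 64gb machine gets 31gb of heap (the oops cap wins), while a 16gb machine gets 8gb (the half-of-RAM rule wins).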
Bonus: handy CAT APIs and node monitoring
The official CAT API and node stats API are recommended. Here are a few common calls:
More: Nodes Info API & Cat APIs
```
## cat api
GET _cat/aliases
GET _cat/indices?v
GET _cat/shards?v
GET _cat/nodes?v
GET _cat/fielddata?v
GET /_cat
GET _cat/nodes?help
GET _cat/nodes?v&h=name,port,hc,hm,rc,rm
GET _cat/nodes?v&h=name,port,cpu,fm,qcm
GET _cat/nodes?v&h=name,port,rcm,sm
GET _cat/nodes?v&h=name,port,siwm,svmm,sfbm
GET _cat/segments/hi_message?human
GET _cat/segments/hi_recent_chat
GET /_cat/ml/anomaly_detectors
GET /_cat/segments

## node stats
GET /_nodes/stats/indices/fielddata?human&fields=*
GET /_nodes/stats/indices/fielddata?human&level=indices&fields=*
GET /_stats/request_cache?human
GET /_nodes/stats/indices/request_cache?human
GET /_nodes/stats?human
GET /_nodes/stats/jvm?human
GET /_nodes/stats/os?human
GET /_nodes/stats/process?human
GET /_nodes/stats/indices/merge?human
GET /_nodes/stats/breaker?human
```
Conclusion
In general, the main heap consumers in ES are:
1. Node Query Cache: 10% of heap by default
2. Indexing Buffer: 10% of heap by default
3. Shard Request Cache: 1% of heap by default
4. Field Data Cache: 0% to unlimited by default
5. Indexing Pressure: 10% of heap by default
6. Segment Memory: unlimited
7. Query aggregations: unlimited
Remember to bound the size of your query text when querying ES: a long text analyzed into many terms can pull large amounts of inverted-index data into the caches, triggering endless full GCs and hanging the cluster.
ES also ships tools such as circuit breakers to keep OOM at bay. In real development you still need to analyze your actual query scenarios and adapt to local conditions: theory combined with practice. I hope this article helps with your work and your production systems.
I am Dying Stranded. I watched a 98-hour movie and still did not earn your likes, follows, and bookmarks. I suppose it is not that you don't like me enough, it's that I didn't watch the movie long enough…