Background
Shay Banon used an early version of Lucene to build a recipe search engine for his wife. Because Lucene was difficult to use directly, he built an abstraction layer in Java so that Java developers could easily add search to their applications, and released it as his first open source project, Compass. He later decided to rewrite Compass as a standalone service to meet the need for a high-performance, real-time, distributed search engine, so that programs in any language could talk to it through the simple RESTful API it exposed. He named it Elasticsearch.
What does it do
- A distributed, real-time document store in which every field can be indexed and searched
- A distributed, real-time analytics search engine
- Capable of scaling to hundreds of nodes and handling petabytes of structured or unstructured data
Principles within a cluster
Elasticsearch is designed to be always available and to scale on demand. Scaling can be achieved by buying more powerful servers (vertical scaling) or by adding more servers (horizontal scaling).
- Clustering
A single running node already forms a cluster. Every node knows where each document lives, so no matter which node receives a request, it can gather data from the other nodes and return a complete response.
- Cluster health
Green: all primary and replica shards are running normally.
Yellow: all primary shards are running normally, but at least one replica shard is not.
Red: at least one primary shard is not running.
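These states can be read from the cluster health API; a minimal sketch, assuming an unsecured cluster at localhost:9200:

```python
import requests

resp = requests.get("http://localhost:9200/_cluster/health")
health = resp.json()

# "status" is green, yellow, or red, exactly as described above.
print(health["status"])
print(health["active_primary_shards"], health["unassigned_shards"])
```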
- The role of an index
An index is where Elasticsearch stores data; in reality it is a logical namespace that points to one or more physical shards.
- Failover
When only one node is running, the cluster has a single point of failure: there is no redundancy. Starting just one more node is enough to protect against data loss, because replica shards can then be allocated to it.
- Horizontal scaling
The number of replica shards can be adjusted dynamically on a running cluster, letting us scale capacity on demand.
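For example, raising the replica count of a hypothetical blogs index on a live cluster (again assuming an unsecured cluster at localhost:9200):

```python
import requests

# number_of_replicas is a dynamic setting: no reindexing required.
resp = requests.put(
    "http://localhost:9200/blogs/_settings",
    json={"number_of_replicas": 2},
)
print(resp.json())  # {'acknowledged': True} on success
```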
- Handling failure
If a node fails, the remaining nodes hold replica copies of its shards, so a replica can be promoted to primary and no data is lost. When the failed node recovers, or a replacement joins, the cluster redistributes the missing shards to it.
Principles of distributed document storage
- Routing a document to a shard
shard = hash(routing) % number_of_primary_shards
routing is an arbitrary string and defaults to the document's _id.
number_of_primary_shards is the number of primary shards, which is fixed when the index is created: if it could change, every previously computed routing value would become invalid.
All document APIs (get, index, delete, bulk, update, mget) accept a routing parameter for customizing this document-to-shard mapping.
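The routing formula can be sketched in Python. This is only an illustration: Elasticsearch actually hashes the routing string with Murmur3, so the stand-in hash below will not reproduce real shard assignments.

```python
import zlib

# Illustrative stand-in for Elasticsearch's Murmur3 routing hash.
def route(routing: str, number_of_primary_shards: int) -> int:
    return zlib.crc32(routing.encode()) % number_of_primary_shards

# routing defaults to the document _id, so the same document
# always lands on the same shard.
print(route("user_123", 5))
```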
- How primary and replica shards interact
Every node knows which shard holds any given document, so a request can be sent to any node in the cluster. The receiving node acts as the coordinating node and forwards the request to the shards that hold the data.
- Creating, indexing, and deleting documents (write operations)
The coordinating node routes the request, via the document's _id, to the node holding the primary shard. Once the primary shard has executed the operation successfully, it forwards the operation to its replica shards; when all replicas report success, the primary reports success back to the coordinating node, which responds to the client.
Consistency: Before attempting a write operation, the primary shard requires a quorum of shard copies to be available. The quorum is: int((primary + number_of_replicas) / 2) + 1
Timeout: If not enough shard copies are available, Elasticsearch waits, by default for up to 1 minute, before failing the operation; the timeout parameter can be set to give up sooner.
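The quorum formula worked through for a few common configurations:

```python
# The quorum rule from above, for a single primary shard.
def quorum(number_of_replicas: int) -> int:
    primary = 1
    return int((primary + number_of_replicas) / 2) + 1

print(quorum(0))  # 1 : a lone primary can proceed
print(quorum(1))  # 2 : the primary and its single replica must both be up
print(quorum(3))  # 3 : the primary plus at least 2 of its 3 replicas
```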
- Fetching a document
A read can be served by the primary shard or by any of its replicas; the coordinating node load-balances across the shard copies round-robin.
- Partial document updates
The primary shard retrieves the document, modifies the JSON in _source, and reindexes it. If the document is concurrently being modified by another process, the primary retries via its retry mechanism, giving up after retry_on_conflict attempts. Once it knows all replica shards have applied the change successfully, the primary returns a success message.
- Multi-document mode
The coordinating node splits a multi-document request (mget or bulk) into per-shard multi-document requests, forwards them in parallel, and collates the responses into a single response for the client.
Elasticsearch can parse a bulk request directly out of the network buffer it was received into: it reads the newline-delimited action/metadata lines to decide which shard should handle each sub-request, avoiding redundant copies of the data.
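A sketch of that newline-delimited bulk format, using a hypothetical blogs index and an unsecured cluster at localhost:9200:

```python
import json
import requests

actions = [
    {"index": {"_index": "blogs", "_id": "1"}},
    {"title": "first post"},
    {"index": {"_index": "blogs", "_id": "2"}},
    {"title": "second post"},
]
# Each action/metadata line and each document sits on its own line,
# which is what lets the coordinating node split the buffer by shard.
body = "\n".join(json.dumps(a) for a in actions) + "\n"

resp = requests.post(
    "http://localhost:9200/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json()["errors"])  # False if every sub-request succeeded
```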
Principles of distributed retrieval
A CRUD operation handles a single document, whose identity is determined by the combination of _index, _type, and the routing value (which by default is the document's _id).
- The query phase
A priority queue is simply a sorted list holding the top-n matching documents; its size is from + size, taken from the pagination parameters.
When a search request reaches a node, that node becomes the coordinating node and creates an empty priority queue of size from + size. Each shard runs the query locally and fills a priority queue of its own, then returns the IDs and sort values of the documents in its queue to the coordinating node, which merge-sorts them into the single, globally sorted result.
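A toy model of the coordinating node's merge step (real shards return richer metadata, but the shape of the computation is the same):

```python
import heapq

# Each shard's local priority queue: (sort value, doc id), sorted descending.
shard_results = [
    [(0.92, "doc4"), (0.71, "doc1")],   # shard 0
    [(0.88, "doc7"), (0.65, "doc2")],   # shard 1
    [(0.99, "doc9")],                   # shard 2
]

from_, size = 0, 3
# The coordinating node merge-sorts the per-shard queues and
# returns the requested page of the global ordering.
merged = heapq.merge(*shard_results, reverse=True)
page = list(merged)[from_ : from_ + size]
print(page)  # [(0.99, 'doc9'), (0.92, 'doc4'), (0.88, 'doc7')]
```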
- The fetch phase
The coordinating node identifies which documents need to be fetched and sends a multi-document GET request to the relevant shards. Each shard loads the documents, enriches them if required, and returns them to the coordinating node. Once all documents have been fetched, the coordinating node returns the result to the client.
Deep pagination:
Depending on document size, shard count, and hardware, paging 1,000 to 5,000 pages deep (10,000 to 50,000 results) is perfectly feasible. But with large enough from values, sorting becomes very heavy: every shard must still build a priority queue of from + size entries, and the coordinating node must sort all of them, burning CPU, memory, and bandwidth.
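The arithmetic behind this warning, as a quick sketch:

```python
number_of_shards = 5
from_, size = 10_000, 10

# Every shard must build and fill a queue of from + size entries...
per_shard_queue = from_ + size                   # 10,010 entries
# ...and the coordinating node must sort all of them to return 10 hits.
coordinator_sorts = number_of_shards * per_shard_queue
print(coordinator_sorts)                         # 50050
```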
- Search options
Several query parameters can influence the search process:
Preference: controls which shards or nodes handle the search request. It matters because of bouncing results: imagine two documents with the same timestamp, sorted on that timestamp field. Since search requests are round-robined across all valid shard copies, the two documents may come back in one order from the primary shard and another order from a replica, so their order flips between requests. Setting preference to an arbitrary string, such as the user's session ID, keeps the same user on the same shards.
Timeout: the maximum time a shard is allowed to spend processing before returning (possibly partial) results.
Routing: restricts the search to the shards that the routing value maps to, instead of broadcasting to all shards.
Search type: dfs_query_then_fetch adds a pre-query phase that retrieves term frequencies from all relevant shards in order to compute global term frequencies, improving relevance scoring.
- Cursor queries (scroll)
A scroll query lets us retrieve large numbers of documents efficiently, without the cost of deep pagination. It takes a snapshot of the index at the time of the initial search and then pulls batches of results from each shard until no results remain.
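A minimal scroll loop, assuming a hypothetical blogs index on an unsecured cluster at localhost:9200 (sorting by _doc is the cheapest order for scrolling):

```python
import requests

base = "http://localhost:9200"

# The initial search opens the scroll and snapshots the index for 1 minute.
resp = requests.post(
    f"{base}/blogs/_search?scroll=1m",
    json={"size": 1000, "sort": ["_doc"], "query": {"match_all": {}}},
).json()

while resp["hits"]["hits"]:
    for hit in resp["hits"]["hits"]:
        ...  # process each document here
    # Fetch the next batch, renewing the 1-minute window each time.
    resp = requests.post(
        f"{base}/_search/scroll",
        json={"scroll": "1m", "scroll_id": resp["_scroll_id"]},
    ).json()
```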
Principles inside a shard
- Inverted index
An inverted index consists of a list of all the unique words that appear in any document and, for each word, a list of the documents that contain it.
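The idea fits in a few lines of Python; a toy sketch:

```python
from collections import defaultdict

docs = {
    1: "the quick brown fox",
    2: "the quick brown fox jumped over the lazy dog",
}

# For every unique term, record the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

print(index["quick"])                  # {1, 2}
print(index["lazy"])                   # {2}
print(index["quick"] & index["lazy"])  # AND is a set intersection: {2}
```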
- Making text searchable
Early full-text search built one large inverted index over the entire document collection and wrote it to disk. When the index needed updating, a completely new index was built and swapped in for the old one, so that the most recent changes became searchable.
- Immutability
- No locks are required. If the index is never updated, there is no concern about multiple processes modifying the data at the same time.
- Once the index has been read into the kernel's file-system cache, it stays there because it never changes. As long as the file-system cache has enough space, most reads are served from memory instead of hitting disk, a significant performance boost.
- Other caches (like the filter cache) remain valid for the life of the index. They never need to be rebuilt when the data changes, because the data never changes.
- Writing a single large inverted index lets the data be compressed, reducing disk I/O and the amount of index that must be cached in memory.
- Dynamically updatable indexes
Rather than rewriting the entire inverted index, Lucene adds supplementary indexes (segments) to reflect recent changes. At query time, each inverted index is consulted in turn, starting with the oldest, and the results are merged.
The write path: new documents are collected in an in-memory indexing buffer; a new segment (a supplementary inverted index) is written to disk; a new commit point listing the new segment is written to disk; the disk is fsynced, so that every write waiting in the file-system cache reaches the physical file; the new segment is opened, making its documents searchable; and the in-memory buffer is cleared, ready for new documents.
- Near-real-time search
New segments are written to the file-system cache first, a cheap step, and flushed to disk only later, a costly step. But once a file is in the cache it can be opened and read like any other file, so new documents become searchable without waiting for a full commit. In Elasticsearch this lightweight process of writing and opening a new segment is called a refresh, and by default it happens once per second.
- Durable changes
The translog (transaction log) records every operation performed on a shard. After a document is indexed, it is added to the in-memory buffer and appended to the translog. The translog persists across restarts; when a flush runs and an fsync commits all segments to disk, the old translog is deleted and a fresh one is started.
- Segment merging
Because a refresh creates a new segment every second, small segments accumulate quickly. Elasticsearch merges small segments into larger ones in the background, purging deleted documents in the process and keeping the number of segments under control.
Proximity matching
Understanding the relationship between words is a complex problem that cannot be solved simply by switching to a different kind of query. But we can at least say that words appearing near one another, or even immediately next to each other, seem more related than words that are far apart.
- Phrase matching
For a document to be identified as matching the phrase "quick brown fox":
- quick, brown, and fox must all appear in the field.
- The position of brown must be 1 greater than the position of quick.
- The position of fox must be 2 greater than the position of quick.
A sketch of the corresponding query follows this list.
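A sketch of the request body; the blogs index and title field are hypothetical, and the cluster is assumed to be an unsecured one at localhost:9200:

```python
import requests

# Only documents containing "quick", "brown", "fox" as consecutive
# terms (positions n, n+1, n+2) will match.
query = {"query": {"match_phrase": {"title": "quick brown fox"}}}

resp = requests.get("http://localhost:9200/blogs/_search", json=query)
print(resp.json()["hits"]["total"])
```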
- Mixing it up
We can introduce flexibility into phrase matching with the slop parameter, which tells match_phrase how far apart the query terms may be while still treating the document as a match.
- Multi-value fields
Phrase queries can match incorrectly across the separate values of a multi-valued field. The fix is the position_increment_gap setting, which inserts a gap between the positions of successive values; because a mapping cannot be changed in place, the simple solution is to delete the mapping and all documents of that type, then recreate both with the correct position_increment_gap.
- The closer the better
Whereas a phrase query simply excludes documents that do not contain the exact phrase, a proximity query (a phrase query with a slop greater than 0) factors the proximity of the query terms into the final relevance _score. By setting a high slop value such as 50 or 100, you can still exclude documents whose words are too far apart while giving higher scores to documents whose words are closer together.
- Using proximity to improve relevance
We can use a simple match query in a must clause; this query determines which documents are included in the result set, and its minimum_should_match parameter trims the long tail of barely relevant results. We can then add a match_phrase query in a should clause: it does not change which documents match, but it boosts the _score of documents whose terms appear close together. A sketch follows.
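A sketch of that must + should pattern (index and field names are hypothetical):

```python
# must decides membership in the result set; should only adjusts _score.
query = {
    "query": {
        "bool": {
            "must": {
                "match": {
                    "title": {
                        "query": "quick brown fox",
                        "minimum_should_match": "30%",  # trims the long tail
                    }
                }
            },
            "should": {
                "match_phrase": {
                    # Boosts documents whose terms sit close together,
                    # without excluding the ones that merely match.
                    "title": {"query": "quick brown fox", "slop": 50}
                }
            },
        }
    }
}
```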
- Performance optimization
Both phrase and proximity queries are more expensive than plain match queries. A match query only checks whether a term exists in the inverted index, while a match_phrase query must compute and compare the positions of multiple, possibly repeated, terms. The Lucene nightly benchmarks show a simple term query to be about 10 times faster than a phrase query, and about 20 times faster than a proximity query (a phrase query with slop). This cost is, of course, paid at search time rather than at index time.
- Finding associated words
Shingles (word n-grams) must be created at index time, as part of the analysis process. It is possible to index unigrams and bigrams into the same field, but it is cleaner to keep them in separate fields that can be queried independently: the unigram field forms the basis of the search, while the bigram field is used to boost relevance.
Shingles are not only more flexible than phrase queries, they also perform better: querying a shingled field is as efficient as a simple match query, with none of the per-search cost a phrase query pays on every search. The price is a small one paid at index time, where the extra terms mean shingled fields take up more disk space. Most applications, however, write once and read many times, so it makes sense to optimize for query speed at index time. A sketch of a shingle analyzer follows.
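A sketch of index settings for such a bigram field, using the standard shingle token filter; all names here are illustrative:

```python
# Index settings for a bigram (shingle) filter and analyzer.
# output_unigrams is off because unigrams live in their own field.
settings = {
    "settings": {
        "analysis": {
            "filter": {
                "my_shingle_filter": {
                    "type": "shingle",
                    "min_shingle_size": 2,
                    "max_shingle_size": 2,
                    "output_unigrams": False,
                }
            },
            "analyzer": {
                "my_shingle_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "my_shingle_filter"],
                }
            },
        }
    }
}
```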