Preface

ES performance may not be as good as you think. With large data volumes, especially hundreds of millions of documents, you may be baffled to find that a single search takes 5–10 seconds. The first query takes 5–10 seconds, but subsequent ones are faster, perhaps a few hundred milliseconds.

To clear up the confusion around these problems, I will cover the main ES performance optimizations and solutions in several parts.

Filesystem Cache is the ultimate performance optimization tool

The data you write to ES is actually written to disk files, which the operating system automatically caches in the filesystem cache.

The ES search engine relies heavily on the underlying filesystem cache. If you give the filesystem cache enough memory to hold all of the index segment files, your searches will be served almost entirely from memory, and performance will be very high.

How big can the performance gap be? In many of our earlier tests and stress tests, searches that go to disk generally take seconds: 1 second, 5 seconds, even 10 seconds. Searches served from the filesystem cache are an order of magnitude faster, ranging from a few milliseconds to a few hundred milliseconds.

Here’s a real case. A company’s ES cluster has 3 machines, each of which seems to have plenty of memory: 64 GB, for a total of 64 × 3 = 192 GB. The ES JVM heap on each machine is 32 GB, leaving only 32 GB per machine for the filesystem cache, or 32 × 3 = 96 GB in total. Meanwhile, the index data files occupy a total of 1 TB of disk across the three machines, roughly 300 GB per machine. How is performance going to look? The filesystem cache holds only about 96 GB, so roughly one-tenth of the data fits in memory and the rest lives on disk. Most searches then go to disk, and performance is poor.
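The memory math above can be sketched as a quick back-of-the-envelope calculation (the numbers are taken from the example; adjust them for your cluster):

```python
# Back-of-the-envelope sizing for the example cluster above.
machines = 3
ram_per_machine_gb = 64
jvm_heap_gb = 32                      # half of RAM goes to the ES JVM heap
fs_cache_per_machine_gb = ram_per_machine_gb - jvm_heap_gb

total_fs_cache_gb = machines * fs_cache_per_machine_gb   # 96 GB
total_index_gb = 1024                                    # ~1 TB of index data

cacheable_fraction = total_fs_cache_gb / total_index_gb
print(f"filesystem cache total: {total_fs_cache_gb} GB")
print(f"fraction of index that fits in memory: {cacheable_fraction:.0%}")
```

About 9% of the index fits in memory, which is why most queries in this setup end up on disk.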

At the end of the day, if you want ES to perform well, your machines’ memory should, in the best case, hold at least half of your total data.

As a rule of thumb, it is best to store in ES only the small amount of data you actually want to search. If the filesystem cache has 100 GB of memory, limit the index data to about 100 GB, so that almost all searches are served from memory. Performance is then very high, generally under 1 second.

Let’s say you have rows with 30 fields: id, name, age, and so on, but your searches only ever filter by id, name, and age. If you naively write the entire row into ES, then 90% of the data is never searched, yet it still occupies filesystem cache space on the ES machines, leaving less room to cache the data that matters. Instead, write only the fields you retrieve by into ES, for example id, name, and age, and store the remaining fields in MySQL/HBase.
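A sketch of this split, assuming a hypothetical row where only id, name, and age are searchable:

```python
SEARCHABLE_FIELDS = {"id", "name", "age"}

def split_row(row: dict) -> tuple[dict, dict]:
    """Split a full row into the slim document indexed in ES
    and the full record stored in MySQL/HBase."""
    es_doc = {k: v for k, v in row.items() if k in SEARCHABLE_FIELDS}
    full_record = dict(row)   # everything, keyed by id in MySQL/HBase
    return es_doc, full_record

row = {"id": 1, "name": "alice", "age": 30, "address": "...", "bio": "..."}
es_doc, full_record = split_row(row)
print(es_doc)  # {'id': 1, 'name': 'alice', 'age': 30}
```

The slim document keeps the index small enough to live in the filesystem cache; the other 27 fields never touch ES.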

HBase is suited to online storage of massive data: you write huge volumes into it, but you don’t run complex searches against it, only simple operations such as lookups by ID or by range. If a search in ES by name and age returns, say, 20 doc ids, you then fetch the complete record for each doc id from HBase and return the results to the front end.
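The search-then-hydrate pattern might look like this sketch; `es_search` and `hbase_get` are hypothetical stand-ins (backed here by plain dicts) for your real ES and HBase clients:

```python
def es_search(name: str, age: int) -> list[str]:
    """Hypothetical ES query returning matching doc ids only."""
    index = {("alice", 30): ["doc-1", "doc-7"]}
    return index.get((name, age), [])

def hbase_get(doc_id: str) -> dict:
    """Hypothetical HBase lookup of the full record by row key."""
    table = {"doc-1": {"id": "doc-1", "name": "alice", "age": 30, "bio": "..."},
             "doc-7": {"id": "doc-7", "name": "alice", "age": 30, "bio": "..."}}
    return table[doc_id]

def search(name: str, age: int) -> list[dict]:
    doc_ids = es_search(name, age)          # small index: fast, in-memory search
    return [hbase_get(d) for d in doc_ids]  # a handful of cheap point lookups

results = search("alice", 30)
```

ES only ever stores and returns ids plus the searchable fields; the heavy rows stay in HBase.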

The data you write to ES should ideally be no larger than, or only slightly larger than, the filesystem cache capacity. Retrieving the doc ids from ES might then take 20 ms, and fetching the 20 full records from HBase by ID might take another 30 ms. Compare that with stuffing 1 TB of data into ES, where every query takes 5–10 s: with this approach each query takes perhaps 50 ms.

Data warming

Even if you do all this, each machine in the ES cluster may still hold twice as much data as its filesystem cache can accommodate. For example, a machine might hold 60 GB of index data against 30 GB of filesystem cache, leaving 30 GB of data on disk.

In that case, you can do data warming.

Take Weibo, for example. You can build a background system that, at regular intervals, searches for hot data and pushes it into the filesystem cache. Later, when users actually read that hot data, it is served straight from memory, which is fast.

Or for e-commerce: a background program can automatically query the hottest items, such as the iPhone 8, every minute or so, flushing them into the filesystem cache.

For data you know is hot and frequently accessed, it is best to build a dedicated cache warm-up subsystem that accesses the hot data at regular intervals ahead of time, keeping it in the filesystem cache. The next time someone accesses it, performance will be much better.
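A warm-up subsystem can be as simple as a loop that periodically re-issues searches for known-hot terms; `run_query` here is a hypothetical stand-in for a real call to your ES client:

```python
import time

HOT_KEYS = ["iphone 8", "trending topic"]   # hot terms, however you track them

def run_query(term: str) -> None:
    """Hypothetical: issue a real search so ES touches the segments
    for this term and the OS pulls them into the filesystem cache."""
    print(f"warming: {term}")

def warm_cache(interval_seconds: int = 60, rounds: int = 1) -> int:
    """Issue one warming query per hot term, `rounds` times."""
    queries = 0
    for _ in range(rounds):
        for term in HOT_KEYS:
            run_query(term)
            queries += 1
        if rounds > 1:
            time.sleep(interval_seconds)
    return queries
```

In production you would run this on a schedule (cron, a scheduled thread) rather than with `time.sleep`.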

Hot and cold separation

ES supports a horizontal split similar to MySQL: put the large volume of rarely accessed cold data in one index and the frequently accessed hot data in a separate one. Writing cold data and hot data to different indexes helps ensure that hot data, once warmed, stays in the OS filesystem cache instead of being evicted by cold-data reads.

Suppose you have six machines and two indexes, one for cold data and one for hot data, with three shards each. The hot-data index lives on 3 machines and the cold-data index on the other 3. Most of the time you access the hot index, and since hot data might be only 10% of the total volume, it is small enough to stay entirely in the filesystem cache, so hot-data access stays fast. Cold data lives in a different index on different machines, so the two never interfere with each other. If someone accesses cold data, much of it will be on disk and performance will be poor, but if only 10% of requests hit cold data while 90% hit hot data, that doesn’t matter much.
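Routing writes between a hot and a cold index can be a simple age check at write time; the 30-day cutoff and the index names below are illustrative assumptions, not ES conventions:

```python
from datetime import datetime, timedelta

HOT_CUTOFF = timedelta(days=30)   # illustrative: last 30 days counts as hot

def target_index(doc_timestamp: datetime, now: datetime) -> str:
    """Pick the index for a document based on its age."""
    if now - doc_timestamp <= HOT_CUTOFF:
        return "articles_hot"     # small index, lives in filesystem cache
    return "articles_cold"        # big index, mostly on disk

now = datetime(2024, 6, 1)
print(target_index(datetime(2024, 5, 20), now))  # articles_hot
print(target_index(datetime(2023, 1, 1), now))   # articles_cold
```

A periodic job would also move documents from the hot index to the cold one as they age out.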

Document model design

With MySQL, we often run complex join queries. In ES, avoid complex associative queries as much as possible; when you do use them, performance is generally poor.

It is best to perform the association in the Java application first and write the already-joined data into ES. At search time, there is then no need to use ES search syntax to do join-like associative searches.

Document model design is very important. Avoid trying to perform all sorts of complicated operations at search time: ES only supports so much, and things it does poorly should not be forced onto it. If such operations are needed, try to handle them at document-model design time, that is, at write time. In particular, avoid complex operations such as join, nested, and parent-child searches wherever possible, because their performance is poor.
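Denormalizing at write time might look like this sketch: join the order and user records in the application, then index the combined document into ES (the field names are illustrative):

```python
def denormalize(order: dict, user: dict) -> dict:
    """Join order + user in the application so ES never has to.
    The combined doc is what you would index into ES."""
    doc = dict(order)
    doc["user_name"] = user["name"]    # fields copied in at write time
    doc["user_city"] = user["city"]
    return doc

order = {"order_id": 42, "user_id": 7, "amount": 99.5}
user = {"user_id": 7, "name": "bob", "city": "Beijing"}
es_doc = denormalize(order, user)
print(es_doc["user_name"])  # bob
```

The trade-off is duplication: if the user’s name changes, every denormalized order document must be updated.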

Paging performance optimization

ES pagination is notoriously slow. Why? Say you show 10 results per page and want page 100. ES actually fetches the first 1,000 results from every shard to a coordinating node; with 5 shards, that is 5,000 results. The coordinating node then merges and processes those 5,000 results to finally produce the 10 results on page 100.

Because the system is distributed, to get the 10 results on page 100 you cannot simply take 2 results from each of 5 shards and merge them into 10. Each shard must return its top 1,000 entries; the coordinating node then sorts and filters them according to your query and pages through to reach page 100. The deeper you page, the more data each shard returns and the longer the coordinating node takes, which is very frustrating. That’s why, as you page through results in ES, it gets slower and slower.
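The cost described above can be simulated: to serve page 100 with 10 results per page, each of 5 shards must ship its top 1,000 hits to the coordinating node, which merges 5,000 candidates to keep 10:

```python
def deep_page(shards: list[list[int]], page: int, size: int) -> tuple[list[int], int]:
    """Simulate from/size paging across shards.
    Each shard contributes its top page*size hits (assumed pre-sorted);
    the coordinator merges them all and slices out the requested page."""
    per_shard = page * size
    candidates = []
    for shard in shards:
        candidates.extend(shard[:per_shard])    # each shard ships page*size docs
    candidates.sort()                           # coordinating node re-merges
    start = (page - 1) * size
    return candidates[start:start + size], len(candidates)

# 5 shards holding disjoint pre-sorted keys; request page 100, 10 per page
shards = [sorted(range(i, 20000, 5)) for i in range(5)]
page_hits, transferred = deep_page(shards, page=100, size=10)
print(transferred)  # 5000 candidates shipped for just 10 results
```

The transfer and merge cost grows linearly with the page number, which is exactly why deep pages get slow.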

We ran into this problem ourselves: using ES for pagination, the first few pages take tens of milliseconds, but by page 10 or beyond, each page takes 5 to 10 seconds.

Is there a solution?

Disallow deep paging (deep paging performance is inherently poor)

Tell the product manager that the system does not allow users to page that deep: by default, the deeper the page, the worse the performance.

It’s similar to the recommendation feeds in apps that load page after page as you pull down.

Just like Weibo’s feed, where you scroll down to load page after page, you can use the scroll API; search online for how to use it.

Scroll takes a snapshot of the matching data up front; after that, each request moves a cursor forward via scroll_id to fetch the next page. Performance is far better than the from/size paging described above, basically milliseconds per page.

The one constraint is that it only suits Twitter-style pull-down paging, where you cannot jump to an arbitrary page. You can’t go to page 10, then page 120, then back to page 58. That’s why many products today don’t let you jump between pages at all: in apps and on some websites, you can only scroll down, page by page.

The scroll parameter must be specified at initialization to tell ES how long to keep the context of this search alive. Make sure users won’t keep scrolling for hours on end, or requests may fail when the context times out.
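A minimal simulation of scroll semantics (this is a conceptual sketch, not the real ES client API): the first call snapshots the result set, and each scroll_id is just a cursor into that snapshot:

```python
_contexts: dict[str, tuple[list, int]] = {}   # scroll_id -> (snapshot, offset)

def open_scroll(data: list, size: int) -> tuple[str, list]:
    """Like the initial search with a scroll keep-alive: snapshot the
    data and return the first page plus a cursor (scroll_id)."""
    snapshot = list(data)                     # writes after this aren't seen
    scroll_id = f"ctx-{len(_contexts)}"
    _contexts[scroll_id] = (snapshot, size)
    return scroll_id, snapshot[:size]

def scroll(scroll_id: str, size: int) -> list:
    """Like a follow-up scroll request: advance the cursor, return next page."""
    snapshot, offset = _contexts[scroll_id]
    page = snapshot[offset:offset + size]
    _contexts[scroll_id] = (snapshot, offset + size)
    return page

sid, page1 = open_scroll(list(range(25)), size=10)
page2 = scroll(sid, size=10)
```

Each page costs the same regardless of depth, which is why scroll stays fast where from/size degrades; the price is the snapshot context ES must keep alive.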

Besides the scroll API, you can also use search_after. The idea is to use the results of the previous page to retrieve the next page. Obviously this also prevents jumping to arbitrary pages; you can only page forward. At initialization, you need to sort on a field with unique values.
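A sketch of search_after request bodies built as Python dicts: sort on a unique field (the field name "id" here is an assumption), then feed the last hit’s sort values into the next request:

```python
def first_page(size: int) -> dict:
    """Initial request: sort on a field with unique values."""
    return {"size": size, "sort": [{"id": "asc"}], "query": {"match_all": {}}}

def next_page(size: int, last_sort_values: list) -> dict:
    """Subsequent request: same sort, plus search_after taken from
    the sort values of the last hit on the previous page."""
    body = first_page(size)
    body["search_after"] = last_sort_values
    return body

# Suppose the last hit of page 1 had sort values [1005]:
body = next_page(10, [1005])
print(body["search_after"])  # [1005]
```

Unlike scroll, search_after keeps no server-side context, so it suits live, stateless "load more" endpoints.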