Elasticsearch (61)
Index management: create, modify, and delete indexes quickly
Create indexes
Syntax for creating indexes
PUT /my_index
{
  "settings": { ... any settings ... },
  "mappings": {
    "type_one": { ... any mappings ... },
    "type_two": { ... any mappings ... },
    ...
  }
}
Example of creating an index
PUT /my_index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "my_type": {
      "properties": {
        "my_field": {
          "type": "text"
        }
      }
    }
  }
}

Response:

{
  "acknowledged": true,
  "shards_acknowledged": true
}
Modify the index
PUT /my_index/_settings
{
"number_of_replicas": 1
}
GET /my_index

{
  "my_index": {
    "aliases": {},
    "mappings": {
      "my_type": {
        "properties": {
          "my_field": {
            "type": "text"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1629427405603",
        "number_of_shards": "1",
        "number_of_replicas": "1",
        "uuid": "yayCFwRaTyK3bTWrzf41xw",
        "version": {
          "created": "5020099"
        },
        "provided_name": "my_index"
      }
    }
  }
}
Remove the index
DELETE /my_index
DELETE /index_one,index_two
DELETE /index_*
DELETE /_all
Specified in the configuration file
To guard against accidental mass deletion, add the following to elasticsearch.yml. Elasticsearch will then reject destructive requests that use _all or wildcard expressions, requiring every index to be named explicitly:

action.destructive_requires_name: true
Elasticsearch
Index management: hands-on practice modifying the analyzer and customizing your own analyzer
The default standard analyzer
- standard tokenizer: splits text on word boundaries
- standard token filter: does nothing
- lowercase token filter: converts all letters to lowercase
- stop token filter (disabled by default): removes stop words such as a, the, it, etc.
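To make the chain above concrete, here is a rough Python sketch of what the standard analyzer does. This is a simplified stand-in, not the real Lucene implementation: the tokenizer regex and the stopword handling are assumptions for illustration only.

```python
import re

def standard_like_analyze(text, stopwords=None):
    # Tokenizer: split on word boundaries (crude stand-in for the standard tokenizer)
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    # Standard token filter: does nothing, so it is omitted here
    # Lowercase token filter: convert every token to lowercase
    tokens = [t.lower() for t in tokens]
    # Stop token filter: disabled by default, applied only when stopwords are given
    if stopwords is not None:
        tokens = [t for t in tokens if t not in stopwords]
    return tokens

print(standard_like_analyze("a dog is in the house"))
# → ['a', 'dog', 'is', 'in', 'the', 'house']
```

With no stopwords supplied, every word survives, mirroring the default behavior shown in the _analyze examples below.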
Modify the analyzer settings
Enable the English stop-word token filter:
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "es_std": {
          "type": "standard",
          "stopwords": "_english_"
        }
      }
    }
  }
}

Response:

{
  "acknowledged": true,
  "shards_acknowledged": true
}
- Analyze with the standard analyzer:
GET /my_index/_analyze
{
  "analyzer": "standard",
  "text": "a dog is in the house"
}

{
  "tokens": [
    { "token": "a",     "start_offset": 0,  "end_offset": 1,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "dog",   "start_offset": 2,  "end_offset": 5,  "type": "<ALPHANUM>", "position": 1 },
    { "token": "is",    "start_offset": 6,  "end_offset": 8,  "type": "<ALPHANUM>", "position": 2 },
    { "token": "in",    "start_offset": 9,  "end_offset": 11, "type": "<ALPHANUM>", "position": 3 },
    { "token": "the",   "start_offset": 12, "end_offset": 15, "type": "<ALPHANUM>", "position": 4 },
    { "token": "house", "start_offset": 16, "end_offset": 21, "type": "<ALPHANUM>", "position": 5 }
  ]
}
- Use the custom es_std analyzer defined above:
GET /my_index/_analyze
{
  "analyzer": "es_std",
  "text": "a dog is in the house"
}

{
  "tokens": [
    { "token": "dog",   "start_offset": 2,  "end_offset": 5,  "type": "<ALPHANUM>", "position": 1 },
    { "token": "house", "start_offset": 16, "end_offset": 21, "type": "<ALPHANUM>", "position": 5 }
  ]
}
Customize your own analyzer
PUT /my_index_2
{
  "settings": {
    "analysis": {
      "char_filter": {
        "&_to_and": {
          "type": "mapping",
          "mappings": ["&=> and"]
        }
      },
      "filter": {
        "my_stopwords": {
          "type": "stop",
          "stopwords": ["the", "a"]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip", "&_to_and"],
          "tokenizer": "standard",
          "filter": ["lowercase", "my_stopwords"]
        }
      }
    }
  }
}

The html_strip char filter removes HTML tags from the text before tokenization.
- Example:
GET /my_index_2/_analyze
{
  "analyzer": "my_analyzer",
  "text": "tom&jerry are a friend in the house, <a>, HAHA!!"
}

{
  "tokens": [
    { "token": "tomandjerry", "start_offset": 0,  "end_offset": 9,  "type": "<ALPHANUM>", "position": 0 },
    { "token": "are",         "start_offset": 10, "end_offset": 13, "type": "<ALPHANUM>", "position": 1 },
    { "token": "friend",      "start_offset": 16, "end_offset": 22, "type": "<ALPHANUM>", "position": 3 },
    { "token": "in",          "start_offset": 23, "end_offset": 25, "type": "<ALPHANUM>", "position": 4 },
    { "token": "house",       "start_offset": 30, "end_offset": 35, "type": "<ALPHANUM>", "position": 6 },
    { "token": "haha",        "start_offset": 42, "end_offset": 46, "type": "<ALPHANUM>", "position": 7 }
  ]
}
- Specify the custom analyzer for a field in the type mapping:
PUT /my_index_2/_mapping/my_type
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}