After running for a long time, the Kibana log system finally broke down, reporting errors such as 503, 429, and 430

  • Data is collected by FileBeat and sent to LogStash; LogStash creates the indexes on ElasticSearch, and Kibana reads from ElasticSearch for display

  • Commands for checking index health and shard allocation
## Check the indexes
curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_cat/indices?v'
curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_cat/allocation?v'
## Check the data
curl -u elastic:mHf8plOIOosUVdT1DUvI 'http://localhost:9200/es-message-*/_search'
curl -u elastic:mHf8plOIOosUVdT1DUvI 'http://localhost:9200/_cat/shards'
## Check the resource configuration of each node
curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_cat/nodes?h=heap.max'
curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_cat/thread_pool/write?v'
curl -u elastic:mHf8plOIOosUVdT1DUvI 'http://localhost:9200/_cluster/health?level=indices'
GET _cluster/health?level=indices
## Check the cluster status
curl -u elastic:mHf8plOIOosUVdT1DUvI 'http://localhost:9200/_cluster/health?pretty'
## Check the shard status
curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_cat/shards?v'
## Clear the filebeat (fielddata) cache
curl -u elastic:mHf8plOIOosUVdT1DUvI -XPOST 'http://localhost:9200/_all/_cache/clear?fielddata=true'
## Empty the data of an index
curl -u elastic:mHf8plOIOosUVdT1DUvI -XPOST 'http://localhost:9200/quality_control/my_type/_delete_by_query?refresh&slices=5&pretty' -H 'Content-Type: application/json' -d '{ "query": { "match_all": {} } }'
## Delete a single index
curl -u elastic:mHf8plOIOosUVdT1DUvI -XDELETE 'http://localhost:9200/prod-228-wx-system*'
## Check the nodes
curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_cat/nodes'
  • Checking the status of each index shows that most of them are yellow and many are red

  • We used the commands above to clear the affected indexes, but the effect was not obvious, so in the end we deleted everything and rebuilt the indexes from scratch.
  • Add the following setting to elasticsearch.yml to disable the circuit breaker check:
indices.breaker.type: none
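Before disabling the breaker entirely, it can be worth checking how close each breaker actually is to its limit. The node stats API exposes this; the command below reuses the same elastic credentials shown earlier, so adjust them for your cluster:

curl -u elastic:mHf8plOIOosUVdT1DUvI -XGET 'http://localhost:9200/_nodes/stats/breaker?pretty'   ## current usage and limit of every circuit breaker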
  • Each index has a replica shard, but these replicas were not allocated, i.e. there are unassigned shards. Use the following command to check the reason for and number of unassigned shards.
GET _cat/shards?h=index,shard,prirep,state,unassigned.reason
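If a shard stays unassigned and the reason code alone is not enough, the cluster allocation explain API (available from Elasticsearch 5.0 onward, so this assumes a reasonably recent cluster) gives a more detailed, human-readable explanation:

GET _cluster/allocation/explain   ## explains why the first unassigned shard it finds cannot be allocated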

Use the following commands to set the number of replicas for these indexes to 0 in bulk

PUT /prod-*/_settings
{
  "number_of_replicas": 0
}

PUT /test-*/_settings
{
  "number_of_replicas": 0
}
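Dropping the replica count to 0 is only a stop-gap to get the cluster back to green; once the nodes have enough capacity again, the replicas would normally be restored. A minimal sketch (the value 1 is just an example, pick whatever your cluster can afford):

PUT /prod-*/_settings
{
  "number_of_replicas": 1
}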

We use the command again to check the cluster health

GET /_cluster/health?pretty   ## Check the cluster health

  • You can see that the cluster is in a healthy state. Let’s look at the status of each index.
GET /_cat/indices?v   ## Check the status of each index

We can see that the statuses are all green now.

Redis and ElasticSearch crash cleanup

  • Clean up redis data and all keys
redis-cli -h hostaddress
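The command above only opens a connection; the actual key cleanup is done with FLUSHALL (drops everything) or a pattern-based delete. A minimal sketch, assuming it really is safe to drop the data; the logstash:* pattern below is only a hypothetical example:

redis-cli -h hostaddress FLUSHALL   ## delete all keys in all databases
redis-cli -h hostaddress --scan --pattern 'logstash:*' | xargs redis-cli -h hostaddress DEL   ## or delete only keys matching a pattern (pattern is an example)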
  • Kibana reported an error after a normal-permission account was added (it also caused an error when the super administrator logged in again):

{"statusCode":403,"error":"Forbidden","message":"Forbidden"}

When assigning roles to the normal-permission account, make sure the kibana_user role is included.
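As a sketch, with X-Pack security on a 6.x cluster the role can be granted when creating (or updating) the user via the security API; on 7.x the path is /_security/user/<name> instead, and the user name and password below are placeholders:

PUT /_xpack/security/user/log_viewer
{
  "password"  : "changeme123",
  "roles"     : [ "kibana_user" ],
  "full_name" : "Normal-permission account for Kibana"
}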

  • The xpack security data may be lost after all indexes in ElasticSearch are cleared. You can run one of the following commands to reset the built-in passwords so you can log in to ElasticSearch again:
bin/elasticsearch-setup-passwords auto          ## passwords are generated automatically
bin/elasticsearch-setup-passwords interactive   ## passwords are set manually
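Once setup-passwords has run, a quick way to verify the new credentials is a plain health check; replace <new-password> with whatever was generated or entered:

curl -u elastic:<new-password> 'http://localhost:9200/_cluster/health?pretty'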