Preface:
Kubernetes 1.16 originally ran Elastic Cloud on Kubernetes (ECK) 1.0, with storage on local disks for over a year and Elasticsearch at version 7.6.2. I have now finished building the Kubernetes 1.20.5 + containerd + Cilium + Hubble environment (https://blog.csdn.net/saynaihe/article/details/115187298) and integrated Tencent Cloud CBS storage (https://blog.csdn.net/saynaihe/article/details/115212770). ECK has also been updated to version 1.5 (can I mention it was still 1.4.0 when I installed it the day before yesterday... fortunately mine is a simple setup without overly complex changes, so even though the version changed, let's just do it again). The earliest deployment, ECK 1.0 on Kubernetes 1.16, was written up years ago at https://duiniwukenaihe.github.io/2019/10/21/k8s-efk/.
About ECK
Elastic Cloud on Kubernetes is an Operator-based installation that greatly simplifies application deployment, much like the prometheus-operator. Deployment follows the official documentation: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html.
1. Deploy ECK in the Kubernetes cluster
1. Install the custom resource definitions and the operator with its RBAC rules:
kubectl apply -f https://download.elastic.co/d…
2. Monitor the operator logs:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

I downloaded the YAML directly to local first. (The "1" suffix on all-in-one.yaml.1 can be ignored; the file only picked up the suffix because it was downloaded a second time.)
kubectl apply -f all-in-one.yaml
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
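Before moving on, it is worth confirming that the operator itself came up; a quick check against the elastic-system namespace used above:

kubectl -n elastic-system get pods
kubectl -n elastic-system get statefulset elastic-operator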
2. Deploy the Elasticsearch cluster
1. Customize the Elasticsearch image
Add the repository-s3 plugin, set the time zone to UTC+8 (Asia/Shanghai), add the Tencent Cloud COS access keys to the keystore, and rebuild the Elasticsearch image.
1. Dockerfile as follows
FROM docker.elastic.co/elasticsearch/elasticsearch:7.12.0
ARG ACCESS_KEY=XXXXXXXXX
ARG SECRET_KEY=XXXXXXX
ARG ENDPOINT=cos.ap-shanghai.myqcloud.com
ARG ES_VERSION=7.12.0
ARG PACKAGES="net-tools lsof"
ENV allow_insecure_settings 'true'

# Switch the time zone to Asia/Shanghai (UTC+8)
RUN rm -rf /etc/localtime && cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' > /etc/timezone

# Install extra troubleshooting packages if any were requested
RUN if [ -n "${PACKAGES}" ]; then yum install -y $PACKAGES && yum clean all && rm -rf /var/cache/yum; fi

# Install the repository-s3 plugin and store the COS access keys in the keystore
RUN \
    /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3 && \
    /usr/share/elasticsearch/bin/elasticsearch-keystore create && \
    echo "XXXXXX" | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.access_key && \
    echo "XXXXXX" | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin s3.client.default.secret_key
2. Build the image and push it to the Tencent Cloud image registry, or another private registry
docker build -t ccr.ccs.tencentyun.com/xxxx/elasticsearch:7.12.0 .
docker push ccr.ccs.tencentyun.com/xxxx/elasticsearch:7.12.0
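Before pushing, you can optionally confirm that the repository-s3 plugin really made it into the image. A sketch (overriding the entrypoint so only the plugin CLI runs):

docker run --rm --entrypoint /usr/share/elasticsearch/bin/elasticsearch-plugin ccr.ccs.tencentyun.com/xxxx/elasticsearch:7.12.0 list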
2. Create the Elasticsearch deployment YAML file and deploy the Elasticsearch cluster
The manifest uses the image tag built above and Tencent Cloud CBS CSI block storage. The deployment namespace is defined in the manifest, and the logging namespace is created first.
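For reference, the cbs-csi StorageClass referenced by the manifest below comes from the CBS post linked in the preface. A minimal sketch of what it might look like; the provisioner name and parameters depend on the Tencent Cloud CBS CSI driver version you installed, so treat this as an assumption rather than the exact class in use:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cbs-csi
provisioner: com.tencent.cloud.csi.cbs   # Tencent Cloud CBS CSI driver
parameters:
  type: CLOUD_PREMIUM                    # disk type, e.g. CLOUD_PREMIUM or CLOUD_SSD
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer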
1. Create the namespace to deploy the Elasticsearch application
kubectl create ns logging
cat <<EOF > elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic
  namespace: logging
spec:
  version: 7.12.0
  image: ccr.ccs.tencentyun.com/XXXX/elasticsearch:7.12.0
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: laya
    count: 3
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          resources:
            requests:
              memory: 4Gi
              cpu: 0
            limits:
              memory: 4Gi
              cpu: 2
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: cbs-csi
        resources:
          requests:
            storage: 200Gi
EOF
2. Deploy the YAML file and view the application deployment status
kubectl apply -f elastic.yaml
kubectl get elasticsearch -n logging
kubectl -n logging get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=elastic'
3. Get the Elasticsearch credentials
kubectl -n logging get secret elastic-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
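With the password in hand, a quick sanity check against the cluster is possible. A sketch assuming the default ECK service name elastic-es-http (TLS was disabled in the manifest above, so plain HTTP works):

PASSWORD=$(kubectl -n logging get secret elastic-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
kubectl -n logging port-forward service/elastic-es-http 9200:9200 &
curl -u "elastic:${PASSWORD}" "http://localhost:9200/_cluster/health?pretty"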
4. Install Kibana directly
As with Elasticsearch, the self-signed certificate is disabled here; refer to https://www.elastic.co/guide/en/cloud-on-k8s/1.4/k8s-kibana-http-configuration.html for the reasoning behind the selfSignedCertificate setting.
cat <<EOF > kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elastic
  namespace: logging
spec:
  version: 7.12.0
  image: docker.elastic.co/kibana/kibana:7.12.0
  count: 1
  elasticsearchRef:
    name: elastic
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: I18N_LOCALE
          value: zh-CN
        resources:
          requests:
            memory: 1Gi
          limits:
            memory: 2Gi
        volumeMounts:
        - name: timezone-volume
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: timezone-volume
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
EOF
kubectl apply -f kibana.yaml
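As with Elasticsearch, you can watch the Kibana resource and its pod until they report healthy; the label selector below follows ECK's naming convention and is an assumption on my part:

kubectl get kibana -n logging
kubectl -n logging get pods --selector='kibana.k8s.elastic.co/name=elastic'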
5. Expose the Kibana service externally
Kibana used to be exposed through a Traefik HTTPS proxy: the TLS secret was added to the namespace and bound to the internal Kibana Service, with the external SLB proxying port 443. But Tencent Cloud SLB can now mount multiple certificates, so that layer is stripped away: Traefik serves plain HTTP on port 80 and all HTTPS certificates are handled at the SLB load balancer. This removes the hassle of certificate management in the cluster, and access-layer logs can be collected directly from the SLB to COS and analyzed with Tencent Cloud's own log service.
cat <<EOF > ingress.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kibana-kb-http
  namespace: logging
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`kibana.XXXXX.com`)
    kind: Rule
    services:
    - name: elastic-kb-http
      port: 5601
EOF
kubectl apply -f ingress.yaml
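Once the DNS record for kibana.XXXXX.com (the placeholder hostname from the IngressRoute above) points at the SLB in front of Traefik, a simple reachability check is:

curl -I http://kibana.XXXXX.com/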
Log in with the username elastic and the Elasticsearch password obtained above to reach the admin page. The new interface is cool.
6. Now add the snapshot repository
Create a COS snapshot repository the same way as for S3; for details you can refer to this blog: https://blog.csdn.net/ypc123ypc/article/details/87860583.
PUT _snapshot/esbackup
{
"type": "s3",
"settings": {
"endpoint":"cos.ap-shanghai.myqcloud.com",
"region": "ap-shanghai",
"compress" : "true",
"bucket": "elastic-XXXXXXX"
}
}
Verify that the snapshot repository was added successfully, then go back and check the cluster status and wait for it to turn green.
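One way to check from Kibana Dev Tools is to read the repository back and take a throwaway snapshot (the snapshot name below is only an example):

GET _snapshot/esbackup

PUT _snapshot/esbackup/test-snapshot-1?wait_for_completion=true

GET _cat/snapshots/esbackup?v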
Almost done; the cluster can now be used normally. There is a lot to pay attention to in day-to-day use, the key being cluster design and capacity planning, and estimating data growth. Next time, when there is time, I'll cover deploying ElastAlert on Kubernetes.