Instructions:

A Kubernetes StatefulSet assigns a stable identity and persistent storage to each Pod. Elasticsearch needs stable storage so that a Pod's data survives rescheduling and restarts, which is why a StatefulSet is used to manage the Pods here. For example, a StatefulSet named es-cluster with three replicas creates the Pods es-cluster-0, es-cluster-1, and es-cluster-2, each bound to its own PersistentVolumeClaim.

Deploy Elasticsearch and use NFS to provide persistent storage (StorageClass)

Environment:

Kubernetes version: v1.19.3

Docker version: 20.10.5

VMs: four VMs

Master: 192.168.29.101

Node1: 192.168.29.102

Node2: 192.168.29.103

NFS server: 192.168.29.104

OS: all four VMs run CentOS 7.9

YAML files: /root/k8s/elasticsearch

Steps:

cd /root/k8s/elasticsearch

1) Create the namespace logging; all of the resources below are assigned to this namespace

kubectl create ns logging
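You can optionally confirm that the namespace exists:

kubectl get ns logging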

2) Create StorageClass

vim elasticsearch-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # StorageClass is cluster-scoped, so metadata.namespace is not set
  name: es-data-db-elasticsearch
provisioner: fuseim.pri/ifs
reclaimPolicy: Retain
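If you want to verify this file on its own before step 7 (an optional check), apply it and list the StorageClass:

kubectl apply -f elasticsearch-storageclass.yaml
kubectl get storageclass es-data-db-elasticsearch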

Note:

The value of the provisioner field must match the PROVISIONER_NAME environment variable set in the provisioner Deployment created in step 4; otherwise dynamic provisioning fails and the PVCs requested for the Elasticsearch Pods (StatefulSet) never bind.
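Side by side, the pairing looks like this (both values come from the files in this article):

# elasticsearch-storageclass.yaml
provisioner: fuseim.pri/ifs

# elasticsearch-storageclass-deploy.yaml (step 4)
env:
  - name: PROVISIONER_NAME
    value: fuseim.pri/ifs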

3) Create the RBAC

vim elasticsearch-storageclass-rbac.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner-elasticsearch
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner-elasticsearch
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-elasticsearch
    namespace: logging
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner-elasticsearch
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner-elasticsearch
  namespace: logging
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner-elasticsearch
  namespace: logging
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-elasticsearch
    namespace: logging
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner-elasticsearch
  apiGroup: rbac.authorization.k8s.io

Note:

1. The ServiceAccount, ClusterRoleBinding, ClusterRole, RoleBinding, and Role names must not conflict with resource names already in use in your running environment

2. The ServiceAccount, Role, and RoleBinding are namespaced objects, so specify namespace: logging on them; the RoleBinding must also reference the ServiceAccount nfs-provisioner-elasticsearch created in step 4
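Once the RBAC objects are applied, you can spot-check the bindings from the master node (an optional check):

kubectl auth can-i create persistentvolumes --as=system:serviceaccount:logging:nfs-provisioner-elasticsearch
kubectl auth can-i update endpoints -n logging --as=system:serviceaccount:logging:nfs-provisioner-elasticsearch

Both commands should print yes.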

4) Deploy the provisioner image and associate it with the StorageClass

vim elasticsearch-storageclass-deploy.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner-elasticsearch
  namespace: logging
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-elasticsearch
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner-elasticsearch
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner-elasticsearch
    spec:
      serviceAccount: nfs-provisioner-elasticsearch
      containers:
        - name: nfs-provisioner-elasticsearch
          image: registry.cn-chengdu.aliyuncs.com/wangyunan_images_public/nfs-client-provisioner:v1
          volumeMounts:
            - name: nfs-client-root-elasticsearch
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.29.104
            - name: NFS_PATH
              value: /nfs/data/elasticsearch
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
            limits:
              cpu: 100m
              memory: 50Mi
      volumes:
        - name: nfs-client-root-elasticsearch
          nfs:
            server: 192.168.29.104
            path: /nfs/data/elasticsearch

Note:

1. fuseim.pri/ifs must match the value of the provisioner field specified when elasticsearch-storageclass.yaml was created

2. NFS_SERVER specifies the IP address of your NFS VM

3. The NFS_PATH directory must be created in advance on your NFS server (see the sketch below), otherwise an error is reported during Pod creation
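A minimal sketch of preparing that directory on the NFS server (192.168.29.104), assuming the NFS server packages are already installed; the export options shown are an example, adjust them to your environment:

mkdir -p /nfs/data/elasticsearch
echo '/nfs/data/elasticsearch *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -arv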

5) Create the Elasticsearch headless Service

vim elasticsearch-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - name: rest
    port: 9200
  - name: inter-node
    port: 9300
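Because clusterIP is None, this headless Service gives each StatefulSet Pod a stable DNS name such as es-cluster-0.elasticsearch (used in discovery.zen.ping.unicast.hosts below). Once the Pods are up, you can check resolution with a throwaway Pod (the busybox:1.28 image tag is an assumption; any image with nslookup works):

kubectl run dns-test -n logging --rm -it --image=busybox:1.28 --restart=Never -- nslookup elasticsearch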

6) Create the Elasticsearch Pods (StatefulSet)

vim elasticsearch-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      app: elasticsearch
  replicas: 3
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: registry.cn-chengdu.aliyuncs.com/wangyunan_images_public/elasticsearch-oss:6.4.3
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - name: inter-node
              containerPort: 9300
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: network.host
              value: "0.0.0.0"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: es-data-db-elasticsearch
        resources:
          requests:
            storage: 8Gi

7) Create all the resources:

kubectl create -f .
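The Pods start one at a time in order (es-cluster-0 first), and each gets its own PVC from the es-data-db-elasticsearch StorageClass. You can watch the progress with:

kubectl get pods -n logging -w
kubectl get pvc -n logging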

8) Verification:

Check the Elasticsearch cluster by calling its REST API. From the master node, forward the port:

kubectl port-forward es-cluster-0 9200:9200 --namespace=logging

Then open another terminal and run the following command:

curl http://localhost:9200/_cluster/state?pretty
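As a quicker sanity check (assuming the port-forward from the previous step is still running), the _cat API lists the nodes and marks which one is the elected master:

curl http://localhost:9200/_cat/nodes?v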

If node1 or node2 runs out of memory, the Pods restart repeatedly and the request returns nothing; make sure the nodes have sufficient memory.
