This is the 24th day of my participation in the Gwen Challenge

21 Next-generation monitoring architecture

21.1 Core metrics pipeline

The core metrics pipeline consists of the kubelet, metrics-server, and the APIs exposed by the apiserver. It serves metrics such as cumulative CPU usage, real-time memory usage, Pod resource usage, and container disk usage.

  • metrics-server (the new-generation way to collect resource metrics)

metrics-server is itself an apiserver, but one that serves only the core metrics. It is not part of Kubernetes proper; it is simply hosted as a Pod on the cluster.

A proxy should sit in front of the Kubernetes apiserver and the metrics-server apiserver, acting as an aggregator that merges the different apiservers into one. That aggregator is kube-aggregator, and the API it aggregates is reached via /apis/metrics.k8s.io/v1beta1.
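
As a sketch of how that wiring looks, the official deploy files register metrics-server with the aggregator through an APIService object along these lines (the service name and namespace assume the default metrics-server deployment):

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server          # the Service fronting the metrics-server Pod
    namespace: kube-system
  group: metrics.k8s.io           # requests under /apis/metrics.k8s.io are proxied here
  version: v1beta1
  insecureSkipTLSVerify: true     # skip TLS verification between aggregator and metrics-server
  groupPriorityMinimum: 100
  versionPriority: 100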

21.2 Monitoring pipeline

The monitoring pipeline collects all kinds of metric data from the system and serves them to end users, storage systems, and the HPA. It carries both core metrics and non-core metrics; the non-core (custom) metrics cannot be understood by Kubernetes natively.

  • prometheus

The second major project under the CNCF, it collects metrics across many dimensions.

The information it collects can serve as the criterion by which the HPA (automatic scaling) decides whether to act.

Prometheus acts both as a monitoring system and as a provider of concrete metrics, but for its metrics to feed a mechanism such as the HPA, their format has to be converted; the plug-in that performs this conversion is called k8s-prometheus-adapter.
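
For illustration, a k8s-prometheus-adapter rule of roughly the following shape (patterned on the adapter's documented examples; the http_requests_total metric is an assumption) exposes a Prometheus counter through the custom metrics API as a per-second rate:

rules:
- seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'   # which Prometheus series to expose
  resources:
    overrides:
      kubernetes_namespace: {resource: "namespace"}    # map series labels to K8S resources
      kubernetes_pod_name: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}_per_second"                              # rename the metric for the API
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'       # PromQL used to answer queries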

21.3 metrics-server installation

  • Official repositories; I’m using the first one here
https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B      # official plugin address
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server    # official Kubernetes add-on example
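One way to fetch those files, following the repository layout in the first link:

$ git clone https://github.com/kubernetes-incubator/metrics-server.git
$ cd metrics-server/deploy/1.8+/
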
  • Among the deployment-related files under /tree/master/deploy/, modify the metrics-server-deployment.yaml file
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        args:                                               # add parameters
        - '--kubelet-preferred-address-types=InternalIP'    # use the node IP instead of the hostname
        - '--kubelet-insecure-tls'                          # do not validate the kubelet's TLS certificate
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

  • Apply all of the manifests in the directory
$ kubectl apply -f ./
  • Check the startup status of the Pod and Service
$ kubectl get pods -n kube-system
$ kubectl get svc -n kube-system
  • Check whether metrics.k8s.io/v1beta1 now appears among the API versions
$ kubectl api-versions
  • After running kubectl proxy --port 8080, the metrics API can be queried directly; kubectl top also works normally
$ curl http://127.0.0.1:8080/apis/metrics.k8s.io/v1beta1
$ kubectl top nodes
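
For reference, querying the nodes sub-resource (curl http://127.0.0.1:8080/apis/metrics.k8s.io/v1beta1/nodes) returns JSON roughly of this shape; the node name and usage values here are illustrative:

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {
      "metadata": { "name": "node01" },
      "window": "30s",
      "usage": { "cpu": "180m", "memory": "1510232Ki" }
    }
  ]
}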

21.4 Prometheus installation

  • How it works
- Prometheus pulls data from each job/exporter over a metrics endpoint; short-lived jobs may instead push their data to a pushgateway, from which Prometheus pulls passively
- Prometheus itself implements a time-series database in which the data is stored
- In Kubernetes, service discovery is used to find services and obtain the targets to monitor
- The API client, web UI, and Grafana are used to display the data held by Prometheus
- When an alert is needed, it is pushed to the Alertmanager component, which sends the alert
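
As a minimal sketch of that pull model in Kubernetes (job names and the pushgateway address are assumptions), a prometheus.yml might look like:

global:
  scrape_interval: 15s                  # how often Prometheus pulls from each target
scrape_configs:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node                          # service discovery: every node becomes a scrape target
- job_name: 'pushgateway'
  honor_labels: true                    # keep the labels pushed by the short-lived jobs
  static_configs:
  - targets: ['pushgateway:9091']       # short-lived jobs push here; Prometheus pulls from it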
  • Deployment manifests
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus
https://github.com/iKubernetes/k8s-prom

21.5 HPA in CLI mode

  • Create the Pod and Service
kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
  • Create an HPA controller
kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
  • View the HPA controller with kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   0%/60%    1         8         1          17s
  • Start the stress test
$ ab -c 100 -n 5000000 http://172.16.100.102:32749/index.html
  • The test results show that automatic scaling takes effect
$ kubectl get hpa -w
NAME    REFERENCE          TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   0%/60%     1         8         1          7m35s
myapp   Deployment/myapp   34%/60%    1         8         1          9m58s
myapp   Deployment/myapp   102%/60%   1         8         1          11m
myapp   Deployment/myapp   102%/60%   1         8         2          11m
myapp   Deployment/myapp   96%/60%    1         8         2          12m
myapp   Deployment/myapp   96%/60%    1         8         4          12m
myapp   Deployment/myapp   31%/60%    1         8         4          13m
myapp   Deployment/myapp   26%/60%    1         8         4          14m
myapp   Deployment/myapp   0%/60%     1         8         4          15m
myapp   Deployment/myapp   0%/60%     1         8         4          17m
myapp   Deployment/myapp   0%/60%     1         8         3          18m

$ kubectl get pods
NAME                     READY   STATUS        RESTARTS   AGE
myapp-64bf6764c5-45qwj   0/1     Terminating   0          7m1s
myapp-64bf6764c5-72crv   1/1     Running       0          20m
myapp-64bf6764c5-gmz6c   1/1     Running       0          8m1s
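
The replica counts above follow the HPA's documented scaling rule, desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue); plugging in the observed values reproduces the jumps in the watch output:

ceil(1 * 102 / 60) = 2    # 102%/60% with 1 replica scales out to 2
ceil(2 *  96 / 60) = 4    # 96%/60% with 2 replicas scales out to 4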

21.6 HPA manifest

  • See kubectl explain hpa.spec for the manifest field definitions
maxReplicas                       <integer>         # Upper limit on the number of Pods when scaling out
minReplicas                       <integer>         # Lower limit on the number of Pods when scaling in
scaleTargetRef                    <Object>          # The target object to scale
  apiVersion                      <string>          # API version of the target
  kind                            <string>          # Kind of the target (e.g. Deployment)
  name                            <string>          # Name of the target
targetCPUUtilizationPercentage    <integer>         # Target average CPU utilization threshold used to drive scaling
  • A sample manifest that autoscales the Pods under the Deployment controller myapp
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 55
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 50Mi
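Assuming the manifest is saved as myapp-hpa-v2.yaml (the filename is arbitrary), apply and inspect it as usual:

$ kubectl apply -f myapp-hpa-v2.yaml
$ kubectl describe hpa myapp-hpa-v2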

Other

Notes are synced to: github.com/redhatxl/aw… Likes, bookmarks, and shares are welcome.