Related components

The following components are required for a complete Istio environment:

  • Prometheus — monitoring for the K8s platform and the Istio platform
  • Jaeger — distributed tracing between services
  • Elasticsearch — stores the tracing data (you can also deploy Fluentd and Kibana to collect and monitor K8s platform logs)
  • Istio — the platform itself

The Prometheus and Jaeger bundled in the Istio deployment package are not used here, because this environment needs to run in production: Prometheus is reused by other services on the platform, and the Jaeger integrated in Istio runs in all-in-one mode (intended for testing) and cannot meet production requirements, so both are deployed separately.

Prometheus

Prometheus is an open source monitoring and alerting system with a time series database (TSDB), originally developed at SoundCloud. Written in Go, it is essentially an open source counterpart of Google's BorgMon monitoring system.

In 2016, the Cloud Native Computing Foundation (CNCF), launched by Google under the Linux Foundation, accepted Prometheus as its second hosted project, and the project remains very active in the open source community.

Compared with Heapster (a K8s sub-project for collecting cluster performance data), Prometheus offers more complete and comprehensive functionality, and it can scale to clusters with tens of thousands of nodes.

The basic principle is to periodically scrape the state of monitored components over HTTP. Any component can be brought under monitoring as long as it exposes a suitable HTTP endpoint; no SDK or other intrusive integration is required, which makes it a good fit for virtualized and containerized environments such as VMs, Docker, and Kubernetes. The HTTP endpoint that exposes a component's metrics is called an exporter. Most of the components commonly used by Internet companies already have exporters available, such as Varnish, HAProxy, Nginx, MySQL, and Linux system information (disk, memory, CPU, network, and so on).
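For illustration only, this is roughly what a static scrape job looks like in a plain prometheus.yml (the job name and target address below are made-up examples; the kube-prometheus deployment described next generates its scrape configuration automatically from CRDs):

# prometheus.yml (illustrative sketch): scrape one exporter over HTTP every 15s
scrape_configs:
  - job_name: node                # hypothetical job name
    scrape_interval: 15s
    metrics_path: /metrics        # the exporter's HTTP endpoint
    static_configs:
      - targets:
          - 192.168.0.10:9100     # hypothetical node-exporter address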

Deployment

Prometheus is deployed here with kube-prometheus from CoreOS, which defines a set of CRDs and provides an operator, making deployment straightforward.

After a git clone of the repository, it can be deployed with no extra thought:

# Install the CRDs and the namespace for Prometheus
kubectl create -f manifests/setup

# Wait until the ServiceMonitor CRD is available
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done

# Deploy the remaining components
kubectl create -f manifests/

kube-prometheus defines the following CRDs:

# Commonly used
prometheusrules.monitoring.coreos.com    -- defines alerting rules
servicemonitors.monitoring.coreos.com    -- defines which metrics endpoints to scrape
# Not commonly used
podmonitors.monitoring.coreos.com        -- pod-level scrape configuration
alertmanagers.monitoring.coreos.com      -- Alertmanager (alerting) configuration
prometheuses.monitoring.coreos.com       -- creates Prometheus server instances
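As a sketch of how the most common CRD is used (the rule name and expression below are illustrative and are not part of the stock kube-prometheus manifests), an alerting rule can be declared like this and is picked up by the operator:

# Illustrative PrometheusRule; the labels assume the default kube-prometheus rule selector
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-rules
  namespace: monitoring
  labels:
    prometheus: k8s
    role: alert-rules
spec:
  groups:
  - name: example.rules
    rules:
    - alert: InstanceDown
      expr: up == 0          # target has stopped answering scrapes
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "Instance {{ $labels.instance }} has been down for 5 minutes"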

After the deployment succeeds, the following Pods and Services are created in the monitoring namespace:

# pod
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   2          4d17h
alertmanager-main-1                    2/2     Running   0          4d17h
alertmanager-main-2                    2/2     Running   0          4d17h
grafana-5db74b88f4-sfwl5               1/1     Running   0          4d17h
kube-state-metrics-54f98c4687-8x7b7    3/3     Running   0          4d17h
node-exporter-5p8b8                    2/2     Running   0          4d17h
node-exporter-65r4g                    2/2     Running   0          4d17h
node-exporter-946rm                    2/2     Running   0          4d17h
node-exporter-95x66                    2/2     Running   4          4d17h
node-exporter-lzgv7                    2/2     Running   0          4d17h
prometheus-adapter-8667948d79-hdk62    1/1     Running   0          4d17h
prometheus-k8s-0                       3/3     Running   1          4d17h
prometheus-k8s-1                       3/3     Running   1          4d17h
prometheus-operator-548c6dc45c-ltmv8   1/1     Running   2          4d17h

The prometheus-k8s-0 pod (and its replica) is the one that periodically scrapes the K8s platform APIs to collect metrics. The ServiceMonitors driving this can be listed with kubectl get -n monitoring servicemonitors.monitoring.coreos.com:

NAME                      AGE
alertmanager              4d17h
grafana                   4d17h
coredns                   4d17h   |
kube-apiserver            4d17h   |
kube-controller-manager   4d17h   |
kube-scheduler            4d17h   | --> monitoring configuration for the K8s components
kube-state-metrics        4d17h   |
kubelet                   4d17h   |
node-exporter             4d17h   |
prometheus                4d17h   | --> monitoring configuration for Prometheus itself
prometheus-operator       4d17h   |

**Note:** at this point we have not yet added the monitoring configuration for the Istio components.

elasticsearch

The role of ES in this environment is to store the distributed tracing data. For tracing we chose Jaeger, which supports Elasticsearch and Cassandra as backend storage; ES is the more reusable of the two.
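As a rough sketch of what this looks like on the Jaeger side (the image tag, namespace, and service URL below are assumptions; adjust them to the actual deployment described in the jaeger section), the collector selects the ES backend through environment variables:

# Fragment of a jaeger-collector container spec (illustrative only)
containers:
- name: jaeger-collector
  image: jaegertracing/jaeger-collector:1.14          # hypothetical version
  env:
  - name: SPAN_STORAGE_TYPE
    value: elasticsearch                               # use ES instead of Cassandra
  - name: ES_SERVER_URLS
    value: http://elasticsearch-logging.default:9200   # hypothetical <service>.<namespace>:<port>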

The Elasticsearch image used is the official gcr.io/fluentd-elasticsearch/elasticsearch:v6.6.1.

After a successful deployment, the service elasticsearch-logging is exposed.

You can query the ES cluster status and data with the ElasticSearch Head extension in Chrome.

jaeger

Now that ES is deployed, for the Jaeger deployment you can refer to my earlier post {% post_link Jaeger-istio Jaeger's istio practice %}; it will not be repeated here.

After deployment, the following three services exist in the jaeger namespace:

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
jaeger-collector   ClusterIP      10.233.15.246   <none>        14267/TCP,14268/TCP,9411/TCP   2d3h
jaeger-query       LoadBalancer   10.233.34.138   <pending>     80:31882/TCP                   2d3h
zipkin             ClusterIP      10.233.53.53    <none>        9411/TCP                       2d3h

The zipkin service collects trace data in the Zipkin format (Jaeger is compatible with Zipkin). jaeger-collector collects trace data in Jaeger's own format, and jaeger-query is used to query the traces.

istio

Now that all the preparations are in place, Istio itself can finally be deployed. The Istio version used is 1.3.4. We install it from the Helm chart templates, so the first step is to customize the deployment parameters by editing install/kubernetes/helm/values.yaml:

# Disable the built-in Grafana, Prometheus, and tracing modules; other modules can be enabled as required
grafana:
  enabled: false

prometheus:
  enabled: false

tracing:
  enabled: false

# Configure tracing to use the external Zipkin-compatible collector (our Jaeger)
tracer:
  zipkin:
    address: zipkin.jaeger:9411  # format: <service name>.<namespace>:<port>

Run the following commands to deploy:

# Add the official Helm repository
helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.4/charts/

# Deploy the Istio CRDs
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system

# Verify the CRDs were created; the result should be 23
kubectl get crds | grep 'istio.io' | wc -l

# Deploy Istio itself
# traceSampling=100 samples 100% of requests for tracing
# kiali.* configures Kiali to connect to the external Jaeger and Prometheus
# the more important Istio components (galley, policy, pilot, telemetry, etc.) are given multiple replicas
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set pilot.traceSampling=100 \
  --set kiali.dashboard.jaegerURL=http://jaeger-query.jaeger:80 \
  --set kiali.prometheusAddr=http://prometheus-k8s.monitoring:9090 \
  --set galley.replicaCount=3 \
  --set mixer.policy.replicaCount=3 \
  --set mixer.telemetry.replicaCount=3 \
  --set pilot.autoscaleMin=2

At this point Istio is deployed and Jaeger is connected to it, but Prometheus is not yet configured to collect Istio's metrics. We need to create our own ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istio-monitor  
  labels:
    app: prometheus-istio
spec:
  selector:
    matchLabels:        # collect /metrics from the pods carrying the following labels
      app: mixer        # business-service metrics in Istio are aggregated in Mixer, so only this component needs to be scraped
      istio: mixer
  endpoints:
  - port: prometheus    # the Mixer component exposes a port named "prometheus" for easy collection
    interval: 10s       # scrape interval
  namespaceSelector:
    matchNames:
    - istio-system   # scope

In addition, since Prometheus does not have permission to collect endpoints outside the monitoring and kube-system namespaces, RBAC rules need to be added to grant it access:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: istio-system
  labels:
    app: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-istio
  labels:
    app: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-istio
  labels:
    app: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-istio
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring

At this point, the platform is set up and you can test it by deploying the Istio sample application bookinfo:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -n <your namespace>
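To actually see traces in Jaeger and metrics in Prometheus, some traffic has to flow through the mesh. Below is a minimal sketch of exposing bookinfo through the Istio ingress gateway (the samples directory ships an equivalent manifest as samples/bookinfo/networking/bookinfo-gateway.yaml; the wildcard host and port 9080 reflect the stock bookinfo setup):

# Illustrative Gateway + VirtualService for bookinfo; apply in the same namespace as bookinfo
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway        # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080

After a few requests to /productpage through the ingress gateway, traces should appear in jaeger-query and the Mixer metrics in Prometheus.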