Key technologies:
- Kubernetes: a distributed container orchestration and scheduling system, widely recognized as the infrastructure of cloud native technology
- Jaeger: an open source, end-to-end distributed tracing system
- Helm: the package manager for Kubernetes
Background
Kubernetes has become the recognized infrastructure of the cloud native ecosystem. As Internet businesses grow rapidly, more and more of them choose to develop and deliver in a cloud native way from day one, and Helm is the tool used to deliver environments throughout that process, much like how we used to package each service into tar or RPM packages and deliver them to different environments for deployment.
However, the inherently distributed, microservice nature of cloud native applications makes it difficult to gain insight into business problems while the services are running, and for SREs there is no compromising on availability and stability. A reliable distributed application monitoring and tracing system is therefore particularly important for a cloud native system.
Of course, there are many similar distributed application monitoring and tracing systems, such as SkyWalking, Zipkin, CAT, and Pinpoint.
Because we use Jaeger and are relatively familiar with the Go language, this article focuses on Jaeger.
Introduction to the technical points
Helm and Jaeger are the key to quickly building a highly available distributed tracing system.
Helm
Helm gives teams the tools they need to collaborate on creating, installing, and managing applications inside Kubernetes, somewhat like APT on Ubuntu or YUM on CentOS.
For operators, the main capabilities are as follows (a short command sketch of this lifecycle follows the list):
- Find pre-packaged software (Charts) to install and use
- Easily create and host your own software packages
- Install the package into any K8s cluster
- Query the cluster to see installed and running packages
- Update, delete, roll back, or view the history of installed software packages
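A minimal sketch of that lifecycle with Helm v3 is shown below; the bitnami/nginx chart and the demo release/namespace names are placeholders chosen for illustration and are not part of this article's deployment:

```bash
# discover and install a packaged chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo nginx                      # find a pre-packaged chart
kubectl create namespace demo               # Helm v3.1 does not create namespaces for you
helm install demo bitnami/nginx -n demo     # install the chart as a release

# query the cluster for installed and running releases
helm list -n demo
helm status demo -n demo

# update, view history, roll back, and finally delete the release
helm upgrade demo bitnami/nginx -n demo --set replicaCount=2
helm history demo -n demo
helm rollback demo 1 -n demo
helm uninstall demo -n demo
```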
Core components:
- Helm: the command-line client tool, mainly used to create, package, and publish Kubernetes application Charts and to create and manage local and remote Chart repositories
- Chart: Helm's software package, in tar format. Much like APT's DEB package or YUM's RPM package, it contains a set of YAML files that define Kubernetes resources.
- Repository: Helm's chart repository. A Repository is essentially a web server that hosts Chart packages for users to download and exposes an index of the Charts it holds for querying. Helm can manage multiple repositories at the same time.
- Release: a Chart deployed into a Kubernetes cluster with the helm install command is called a Release. It can be understood as an instance of the application that Helm deploys from the Chart package.
Jaeger
Jaeger is an open source distributed tracing system released by Uber Technologies, inspired by Dapper and OpenZipkin. It is used to monitor and troubleshoot microservice-based distributed systems, covering:
- Distributed context propagation
- Distributed transaction monitoring
- Root cause analysis
- Service dependency analysis
- Performance/latency optimization
Jaeger architecture:
Overall, a basic Jaeger tracing setup consists of the following components:
- jaeger-query: the component that serves queries and retrieves traces for clients, and ships with a basic UI
- jaeger-collector: receives trace data from jaeger-agent and runs it through a processing pipeline, which currently validates the traces, creates indexes, performs data transformations, and stores the data in the corresponding back end
- jaeger-agent: a network daemon that listens for spans sent over UDP, batches them, and forwards them to the collector. It is designed to be deployed to every host as an infrastructure component, and it abstracts collector routing and discovery away from the client
- backend storage: pluggable back-end storage for trace data; Cassandra, Elasticsearch, and Kafka are supported
- ingester: an optional component that consumes data from Kafka and writes it to a directly readable Cassandra or Elasticsearch store
Note: trace and span are terms specific to tracing systems. A span is a single unit of work and carries the metadata of that unit, such as the specific operation and its execution time, while a trace is the data path of a request through the whole system; a trace can be thought of as a directed acyclic graph (DAG) of spans.
Since Jaeger is a multi-component system, several network ports are involved in data transmission. The following figure shows the port usage of each component:
Quickly initialize a Jaeger environment
Note:
- Elasticsearch is chosen as the back-end storage, and version 6 is required (the Elasticsearch Go SDK is strongly version-dependent, and the latest Jaeger release only supports ES 6)
- Elasticsearch runs as an external instance, because for now we do not keep stateful services inside the K8s cluster; this also makes it easy to swap in an independent ES cluster instance directly in production
Install Helm v3 and basic usage
```
$ wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
$ tar -zxvf helm-v3.1.2-linux-amd64.tar.gz
$ cd linux-amd64
$ mv helm /usr/local/bin/
$ helm --help

# add the stable and bitnami repositories for helm
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo list                 # check the configured repositories

# lint a chart
$ helm lint <chart-dir>
# render a chart's yaml templates
$ helm template <chart-dir>
# deploy a chart (create a release for the service)
$ helm install <release-name> <chart-dir>
```
Download the Jaeger Helm Chart
```
sh-4.2# git clone https://gitee.com/BGBiao/sample-jaeger.git

# directory structure
sh-4.2# cd sample-jaeger/
sh-4.2# ls
Chart.lock  charts  Chart.yaml  es  files  readme.md  templates  values.yaml
sh-4.2# tree -L 2 .
.
|-- Chart.lock
|-- charts
|-- Chart.yaml
|-- es
|   `-- elasticsearch.yml
|-- readme.md
|-- templates
|   |-- agent-deploy.yaml
|   |-- agent-svc.yaml
|   |-- collector-deploy.yaml
|   |-- collector-svc.yaml
|   |-- configmap.yaml
|   |-- elasticsearch-secret.yaml
|   |-- _helpers.tpl
|   |-- hotrod-deploy.yaml
|   |-- hotrod-ingress.yaml
|   |-- hotrod-svc.yaml
|   |-- query-deploy.yaml
|   `-- query-svc.yaml
`-- values.yaml
```
Prepare the Elasticsearch back-end storage
```
$ docker run --name jaeger-es-tmp -itd -p 9200:9200 -p 9300:9300 \
    -e "discovery.type=single-node" \
    -v $PWD/es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    elasticsearch:6.4.3

$ curl localhost:9200
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

# ES has authentication enabled, so initialize the passwords for the built-in users
sh-4.2# docker exec -it jaeger-es-tmp bash
[root@b90822f2c26b elasticsearch]# ./bin/elasticsearch-setup-passwords
Sets the passwords for reserved users

Commands
--------
auto - Uses randomly generated passwords
interactive - Uses passwords entered by a user

Non-option arguments:
command

Option         Description
------         -----------
-h, --help     show help
-s, --silent   show minimal output
-v, --verbose  show verbose output
ERROR:
[root@b90822f2c26b elasticsearch]# ./bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system,beats_system.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [elastic]

# verify access with the new password
$ curl -u 'elastic:xxxxxxx' http://localhost:9200
{
  "name" : "1xdrvFl",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "CO76XmP0Qbexysev4Al2MQ",
  "version" : {
    "number" : "6.4.3",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "fe40335",
    "build_date" : "2018-10-30T23:17:19.084789Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
```
Modify Chart configuration and install Release
```
# modify the ES address and password in the helm chart:
#   storage.elasticsearch.host / storage.elasticsearch.password
# then deploy the jaeger services
sh-4.2# ls
Chart.lock  charts  Chart.yaml  es  files  readme.md  templates  values.yaml
sh-4.2# helm install -n sample-jaeger jaeger .
Error: create: failed to create: namespaces "sample-jaeger" not found
sh-4.2# kubectl create ns sample-jaeger
namespace/sample-jaeger created
sh-4.2# helm install -n sample-jaeger jaeger .
NAME: jaeger
LAST DEPLOYED: Sun May 23 23:42:58 2021
NAMESPACE: sample-jaeger
STATUS: deployed
REVISION: 1
TEST SUITE: None

sh-4.2# helm list -n sample-jaeger
NAME    NAMESPACE      REVISION  UPDATED                                  STATUS    CHART                APP VERSION
jaeger  sample-jaeger  1         2021-05-23 23:42:58.75504002 +0800 CST   deployed  sample-jaeger-0.1.0  1.16.0

sh-4.2# kubectl get all -n sample-jaeger
NAME                                                       READY   STATUS    RESTARTS   AGE
pod/jaeger-agent-579587cb85-rzd46                          1/1     Running   0          27s
pod/jaeger-collector-57c57bc788-682sn                      1/1     Running   0          27s
pod/jaeger-query-85d86d44d-dhj7g                           1/1     Running   0          27s
pod/jaeger-sample-jaeger-jaeger-example-7bf65bd75b-mlxhv   1/1     Running   0          27s

NAME                                          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                           AGE
service/jaeger-agent                          NodePort   172.16.127.123   <none>        6831:31139/UDP,5775:31355/TCP,6832:30937/TCP,5778:32360/TCP      27s
service/jaeger-agent-backup                   NodePort   172.16.125.116   <none>        6831:30036/UDP,5775:31138/TCP,6832:31179/TCP,5778:30568/TCP      27s
service/jaeger-collector                      NodePort   172.16.127.93    <none>        14250:30708/TCP,14268:32354/TCP,9411:32312/TCP,14267:30986/TCP   27s
service/jaeger-collector-backup               NodePort   172.16.125.137   <none>        14250:30477/TCP,14268:32513/TCP,9411:30679/TCP,14267:31237/TCP   27s
service/jaeger-query                          NodePort   172.16.124.251   <none>        16686:30843/TCP,16687:30530/TCP                                   27s
service/jaeger-query-backup                   NodePort   172.16.125.214   <none>        16686:30273/TCP,16687:30858/TCP                                   27s
service/jaeger-sample-jaeger-jaeger-example   NodePort   172.16.125.35    <none>        80:31255/TCP                                                      27s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jaeger-agent                          1/1     1            1           27s
deployment.apps/jaeger-collector                      1/1     1            1           27s
deployment.apps/jaeger-query                          1/1     1            1           27s
deployment.apps/jaeger-sample-jaeger-jaeger-example   1/1     1            1           27s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/jaeger-agent-579587cb85                          1         1         1       27s
replicaset.apps/jaeger-collector-57c57bc788                      1         1         1       27s
replicaset.apps/jaeger-query-85d86d44d                           1         1         1       27s
replicaset.apps/jaeger-sample-jaeger-jaeger-example-7bf65bd75b   1         1         1       27s

# check that the jaeger-query service responds
sh-4.2# curl http://172.16.124.251:16686 -i
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Date: Sun, 23 May 2021 15:44:39 GMT

# the core services (agent, collector, query) are fine,
# so we can open the Jaeger UI directly;
# since the services use NodePort, they can be reached at <host IP>:<nodePort>
```
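If the NodePort addresses above are not directly reachable from your workstation, a simple alternative (a sketch using standard kubectl, with the service name and namespace taken from the output above) is to port-forward the query service locally:

```bash
# forward local port 16686 to the jaeger-query service inside the cluster
kubectl -n sample-jaeger port-forward svc/jaeger-query 16686:16686

# in another terminal, the Jaeger UI should now answer on localhost
curl -I http://127.0.0.1:16686/
```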
Use the Jaeger system
Note: by default no application is reporting data yet, so there is no trace data to see. However, as shown above, we also deployed a test program, jaeger-example (HotROD). It is a simple but complete application whose logic runs through the front end, database, and cache, which lets us clearly see the whole call chain.
Access the jaeger-example (HotROD) service through its NodePort, i.e. node:31255
As you can see, the page gives a global view of the whole service: starting from a user clicking a customer entry to issue a request, we can simulate how service requests flow through the entire process, and clicking a few of the entries at will makes the service proactively report trace data to Jaeger
After simulating some user requests, we can look at some metrics of the service
As you can see, the five HotROD requests generated five different traces
Then, you can click any trace to view the time consumption of the whole invocation process
From the method-level calls in the trace, timed-out calls and abnormal modules can also be spotted quickly
You can also switch the trace view in the upper right corner to, for example, the trace graph, which makes it easy to see the call relationships within a complete request
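The same data can also be sanity-checked from the command line. The sketch below is an assumption rather than part of the original walkthrough: it uses the HTTP API that backs the Jaeger UI (paths such as /api/services and /api/traces) and the jaeger-span-*/jaeger-service-* index names that the Elasticsearch storage back end uses by default; replace the hosts, credentials, and service name to match your environment:

```bash
# list the services Jaeger has received spans for (should include the HotROD services)
curl -s "http://<jaeger-query-host>:16686/api/services"

# fetch a few recent traces for one of the HotROD services
curl -s "http://<jaeger-query-host>:16686/api/traces?service=frontend&limit=5"

# confirm that trace data is landing in the Elasticsearch back end
curl -s -u 'elastic:xxxxxxx' "http://<es-host>:9200/_cat/indices/jaeger-*?v"
```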
At this point, we have quickly deployed a distributed Jaeger cluster using Helm
Of course, the HotROD service is just a test service used to verify that our Jaeger monitoring works, so next we destroy it to free the associated resources
Modify the Helm Chart to update the service
Note: since our services are defined by the YAML in the Chart, managing them only requires changing that YAML.
Set jaegerexample.enabled: false in values.yaml, then run the following commands:
```
# after modifying the configuration file, upgrade the service with helm upgrade
sh-4.2# helm upgrade -n sample-jaeger -f ./values.yaml jaeger .
Release "jaeger" has been upgraded. Happy Helming!
NAME: jaeger
LAST DEPLOYED: Mon May 24 00:18:46 2021
NAMESPACE: sample-jaeger
STATUS: deployed
REVISION: 2
TEST SUITE: None

# as you can see, the release revision has been bumped to 2
sh-4.2# helm list -n sample-jaeger
NAME    NAMESPACE      REVISION  UPDATED                                   STATUS    CHART                APP VERSION
jaeger  sample-jaeger  2         2021-05-24 00:18:46.865941544 +0800 CST   deployed  sample-jaeger-0.1.0  1.16.0

# and the jaeger-example workload has been removed
sh-4.2# kubectl get all -n sample-jaeger
NAME                                    READY   STATUS    RESTARTS   AGE
pod/jaeger-agent-579587cb85-rzd46       1/1     Running   0          38m
pod/jaeger-collector-57c57bc788-682sn   1/1     Running   0          38m
pod/jaeger-query-85d86d44d-dhj7g        1/1     Running   0          38m

NAME                              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                           AGE
service/jaeger-agent              NodePort   172.16.127.123   <none>        6831:31139/UDP,5775:31355/TCP,6832:30937/TCP,5778:32360/TCP      38m
service/jaeger-agent-backup       NodePort   172.16.125.116   <none>        6831:30036/UDP,5775:31138/TCP,6832:31179/TCP,5778:30568/TCP      38m
service/jaeger-collector          NodePort   172.16.127.93    <none>        14250:30708/TCP,14268:32354/TCP,9411:32312/TCP,14267:30986/TCP   38m
service/jaeger-collector-backup   NodePort   172.16.125.137   <none>        14250:30477/TCP,14268:32513/TCP,9411:30679/TCP,14267:31237/TCP   38m
service/jaeger-query              NodePort   172.16.124.251   <none>        16686:30843/TCP,16687:30530/TCP                                   38m
service/jaeger-query-backup       NodePort   172.16.125.214   <none>        16686:30273/TCP,16687:30858/TCP                                   38m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jaeger-agent       1/1     1            1           38m
deployment.apps/jaeger-collector   1/1     1            1           38m
deployment.apps/jaeger-query       1/1     1            1           38m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/jaeger-agent-579587cb85       1         1         1       38m
replicaset.apps/jaeger-collector-57c57bc788   1         1         1       38m
replicaset.apps/jaeger-query-85d86d44d        1         1         1       38m
```
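As an alternative to editing values.yaml by hand, the same toggle can be passed on the command line, and a bad revision can always be rolled back. This sketch only assumes the jaegerexample.enabled flag mentioned above plus standard Helm v3 behavior:

```bash
# disable the example app without editing values.yaml
helm upgrade jaeger . -n sample-jaeger --set jaegerexample.enabled=false

# review the release history and roll back to revision 1 if the upgrade misbehaves
helm history jaeger -n sample-jaeger
helm rollback jaeger 1 -n sample-jaeger
```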
In this way, Helm lets us elegantly manage the deployment and lifecycle of our service instances.
Finally, have you noticed that using Helm directly still has a few problems:
- Namespaces cannot be declared and are not created automatically
- There is no kubectl apply-style command to directly update the configuration of the corresponding resources
- Environment differentiation and chart version control are hard to achieve
Tools such as Helmfile address this with a declarative YAML file that helps users manage and maintain a large number of Helm charts, improves deployment observability and repeatability, differentiates environments, and does away with scattered --set parameters. Those of you who are interested can give it a try.
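As a rough illustration only, the sketch below shows what such a declarative setup could look like with Helmfile; the file layout, environment names, and values files are assumptions for illustration and are not part of this article's repository:

```bash
# helmfile.yaml: declare the release, its namespace, and per-environment values
cat > helmfile.yaml <<'EOF'
environments:
  staging:
    values:
      - env/staging.yaml
  production:
    values:
      - env/production.yaml

releases:
  - name: jaeger
    namespace: sample-jaeger
    chart: ./sample-jaeger
    values:
      - values.yaml
EOF

# preview and apply one environment declaratively
# (helmfile diff requires the helm-diff plugin)
helmfile -e staging diff
helmfile -e staging apply
```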