In the last article, we mentioned that everything on Kubernetes is a resource. We deployed the HelloDevOps application by creating three resources: Pods (via a Deployment), a Service, and an Ingress. A simple application needs three YAML files, but in practice there is usually more than one application (microservices), and the YAML files quickly grow past ten, even to dozens. Creating and deleting each resource with kubectl on every deployment quickly becomes unmanageable. For this case, Helm exists to make the whole process simple.
Introduction to Helm
As the official website says, Helm is the package manager for Kubernetes, and "Helm is the best way to find, share, and use software built for Kubernetes." As the name suggests, Helm plays the same role for Kubernetes that apt-get plays for Ubuntu and yum plays for RedHat and CentOS.
Helm Fundamentals
Helm follows a client/server (C/S) model consisting of two parts: the Helm client (helm) and the server (Tiller; this describes version 2.x, as 3.x removed Tiller). The Helm client is a CLI, while Tiller runs as a Deployment in the kube-system namespace of the Kubernetes cluster.
The Helm client accepts user commands such as helm install, helm delete, helm upgrade, and helm ls to operate on the releases generated from Helm charts. Tiller mainly interacts with the Kubernetes API server (for Kubernetes, all requests enter through the API server) to create, delete, update, and query Kubernetes resources. In addition, Tiller stores and manages the releases that have been created.
When a user runs an install to generate a release from a chart, the Helm client accepts the install request and sends it to the Tiller server. Tiller renders the relevant files in the chart according to the values defined in values.yaml or the parameters passed with --set, sends the rendered manifests to the Kubernetes API server to create the corresponding resources, and finally completes the creation of the release.
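For example, a minimal sketch of this flow (the chart path, release name, and the overridden value below are hypothetical):

$ helm install ./devops --name hello-devops --set image.tag=v1.3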
Helm installation
Since Helm has two parts, a client and a server, the installation involves a few steps.
Step 1: Install the Helm client
The Helm client can be installed from source or from a binary release. This article uses the binary installation (it is more convenient, so I personally recommend it). First, on the GitHub releases page of the Helm client, find the version that matches your OS and download it. Unzip the downloaded file and copy the extracted helm binary to the target path (such as /usr/local/bin/helm). The helm help command can be used to verify that the Helm client was installed successfully.
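A minimal sketch of the binary installation (the version and platform in the URL are examples; pick the release that matches your OS from the releases page):

$ wget https://get.helm.sh/helm-v2.11.0-linux-amd64.tar.gz
$ tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
$ cp linux-amd64/helm /usr/local/bin/helm
$ helm help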
Step 2: Create a Service Account
In Kubernetes, access to resources is controlled through RBAC (Role-Based Access Control). Tiller lives in the kube-system namespace, but the releases it manages live in other namespaces, so we need to create a Service Account for Tiller with the corresponding permissions. This Service Account is bound to the cluster-admin ClusterRole, which ensures that Tiller can list, delete, and update resources in every namespace of the cluster.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
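Assuming the manifest above is saved as tiller-rbac.yaml (a hypothetical file name), apply it with:

$ kubectl apply -f tiller-rbac.yaml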
After creating the Service Account, you can see its details under the kube-system namespace:
$ kubectl -n kube-system get sa | grep tiller
tiller    1    115d
Step 3: Install the Helm server
With the Service Account (tiller) created in the previous step, you can run the helm init --service-account tiller command to install the Tiller server. This process starts by validating the local Helm environment, then connects to the Kubernetes cluster that kubectl points at; if the connection succeeds, a Tiller Deployment is created in the kube-system namespace.
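The command described above:

$ helm init --service-account tiller

You can then see the Tiller Deployment under the kube-system namespace: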
$ kubectl -n kube-system get deploy,pods | grep tiller
deployment.extensions/tiller-deploy    1     1     1     1     115d
pod/tiller-deploy-6f6fd74b68-tzhcv     1/1   Running     0     2d21h
Run helm version:
$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
At this point, both the Helm client and the server are installed. Now we can use Helm to deploy the application from the previous article, starting with a few Helm-related concepts.
Some basic concepts of Helm
Chart
A chart is the package used to deploy an application. It consists of a set of files that describe the Kubernetes resources (Deployment, Service, Ingress, etc.) to be created on Kubernetes.
Release
An instance created from a Helm chart (typically a deployed microservice application).
Repository
A repository stores Helm chart packages, just as a Docker Registry stores Docker images; JFrog Artifactory is one example.
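A minimal sketch of working with a chart repository (the stable repository URL shown is the one used in the Helm 2 era):

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
$ helm repo list
$ helm search mysql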
Deploying Applications with Helm
First, we create a chart named devops with the helm create devops command and see what files it contains.
$ tree devops
devops
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── service.yaml
└── values.yaml
Here is a brief introduction to the role of each of these files.
Chart.yaml
This file is mainly used to describe the chart information, including version number, maintainer information, etc.
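For reference, the Chart.yaml generated by helm create looks roughly like this (the description and version numbers are the defaults and may differ):

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: devops
version: 0.1.0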
charts
This is a directory that holds the charts this chart depends on (subcharts).
templates
This is a directory that contains the main content of the chart. You can see deployment.yaml, ingress.yaml, and service.yaml; by default a newly created chart contains only these three resource definition files, but that does not mean only these three are allowed. If the application needs other resources such as a Secret or a ConfigMap, their definitions can be added to this directory as well; the more resource files an application needs, the more Helm's convenience and speed show.
_helpers.tpl
This is a template file that defines named template snippets (helpers) that the other templates can reuse.
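For example, a sketch of the kind of helper defined here; the devops.fullname helper used by the templates below might look roughly like this (the exact definition generated by helm create may differ):

{{- define "devops.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}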
values.yaml
This is the last and most important file to introduce. Before we look at it, let's take a look at the contents of the service.yaml file, which will help us understand the values.yaml file better.
$ cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "devops.fullname" . }}
  labels:
    app: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ .Release.Name }}
Above you can see many expressions enclosed in "{{ }}"; these are the variables defined in the Helm chart. Helm charts are written in the Go template language, whose text/template package implements data-driven templates for generating textual output. The most intuitive way to think about a template is like this:
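A minimal sketch of the substitution: given this fragment of values.yaml

service:
  type: ClusterIP

the template line

type: {{ .Values.service.type }}

renders to

type: ClusterIP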
The contents of these brackets can be interpreted as "parameters", and the "arguments" that fill them are taken from the values.yaml file. In fact, if you open the other files in the chart directory one by one, such as deployment.yaml and ingress.yaml, you will see the same pattern as in service.yaml. This is where the importance of the values.yaml file comes in: what the Deployment, Ingress, and Service will look like depends on how values.yaml defines them.
Moreover, another convenience of Helm is that a single chart can deploy multiple releases, which in turn can serve multiple environments, such as a dev environment, an SVT environment, and a prod environment. All this change requires is a values.yaml file. For example, if you write the dev-specific settings into a values-dev.yaml file, the release created from it will target the dev environment; SVT and prod releases can be created in the same way, as the sketch below shows.
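A sketch of this per-environment workflow (the release names, namespaces, and values file names are hypothetical):

$ helm install . --name devops-dev --namespace dev -f values-dev.yaml
$ helm install . --name devops-svt --namespace svt -f values-svt.yaml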
Among the variables enclosed in "{{ }}", there are a few global ones you should pay attention to:

.Chart.Name

The name of the chart. For the devops chart above, the value of .Chart.Name is devops.
.Release.Name
The name of a release created from the chart. For example, for a hello-devops release generated by installing the devops chart above, the value of .Release.Name is hello-devops.
.Release.Namespace
The namespace of the release created from the chart. For example, for the hello-devops release above created under the devops namespace, the value of .Release.Namespace is devops. Now let's write the "arguments" from the deployment.yaml, service.yaml, and ingress.yaml files of the previous article into the values.yaml file.
$ cat values.yaml
replicaCount: 1
env:
  running_env: dev
  name: devops
  client: helm
  server: tiller
  version: v1
image:
  repository: dllhb/go_test
  tag: v1.3
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  path: /
  hosts:
    - devops.test.com
  tls:
    - secretName: devops-tls
      hosts:
        - devops.test.com
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
The contents of the templates/deployment.yaml file are as follows
$ cat templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
        version: non-canary
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 9999
              protocol: TCP
          env:
            {{- include "devops.env" . | nindent 12 }}
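The env section above references a devops.env helper whose definition is not shown in this article; a sketch of what such a helper in _helpers.tpl might look like, assuming it maps entries of the env block in values.yaml to container environment variables:

{{- define "devops.env" -}}
- name: RUNNING_ENV
  value: {{ .Values.env.running_env | quote }}
- name: VERSION
  value: {{ .Values.env.version | quote }}
{{- end -}}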
The contents of the service.yaml file are as follows
$ cat templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "devops.fullname" . }}
  labels:
    app: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ .Release.Name }}
The contents of the ingress.yaml file are as follows
$ cat templates/ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "devops.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
{{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . | quote }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: http
  {{- end }}
{{- end }}
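Before installing, you can optionally sanity-check the chart: helm lint examines it for problems, and helm template renders the manifests locally so you can inspect the substituted values:

$ helm lint .
$ helm template . -f values.yaml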
Now you can go to the devops directory and use helm install to install the application.
$ helm install . --name devops --namespace devops -f values.yaml
NAME: devops
LAST DEPLOYED: Sun Aug 4 16:21:03 2019
NAMESPACE: devops
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME AGE
devops 4s
==> v1beta2/Deployment
devops 4s
==> v1beta1/Ingress
devops 4s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
devops-5f499c48b4-rzldr 1/1 Running 0 4s
NOTES:
1. Get the application URL by running these commands:
https://devops.test.com/
View the Pod, Service, and Ingress resources
$ kubectl -n devops get pods,svc,ing
NAME                                READY   STATUS    RESTARTS   AGE
pod/hello-devops-78dd67d9cc-jpx2z   1/1     Running   0          2d22h

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/hello-devops-svc   ClusterIP   172.21.246.13   <none>        8888/TCP   14d

NAME                                      HOSTS             ADDRESS        PORTS     AGE
ingress.extensions/hello-devops-ingress   devops.test.com   10.208.70.22   80, 443   14d
You can then use https://devops.test.com to access the application
$ curl -k -s https://devops.test.com
Hello DevOps, pod name is hello-devops-78dd67d9cc-jpx2z
In addition, you can use helm ls to view the releases that have been deployed
$ helm ls | grep enterprise
enterprise    1    Thu Jun  6 10:13:02 2019    DEPLOYED    enterprise-0.1.0    1.0    enterprise
If you want to remove a release that has already been installed, you can execute helm delete devops --purge.
$ helm delete devops --purge
release "devops" deleted
Source: DevSecOps SIG
Author: Little Horse Brother