WeChat official account: Operation and Maintenance Development Story. Author: Jock
What is Argo Workflows?
Argo Workflows is an open source project that provides container-native workflows for Kubernetes, implemented primarily as Kubernetes CRDs.
Features are as follows:
- Each step of the workflow is a container
- Model a multi-step workflow as a series of tasks, or use a directed acyclic graph (DAG) to describe the dependencies between tasks
- Easily run compute-intensive jobs for machine learning or data processing in a short period of time
- Run CI/CD pipelines on Kubernetes without complex software configuration
Installation
Installing the controller
To install Argo Workflows, run the following commands.
kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/quick-start-postgres.yaml
When the installation is complete, the following four pods are generated.
# kubectl get po -n argo
NAME READY STATUS RESTARTS AGE
argo-server-574ddc66b-62rjc 1/1 Running 4 4h25m
minio 1/1 Running 0 4h25m
postgres-56fd897cf4-k8fwd 1/1 Running 0 4h25m
workflow-controller-77658c77cc-p25ll 1/1 Running 4 4h25m
Among them:
- argo-server is the Argo server
- minio is the artifact repository
- postgres is the database
- workflow-controller is the workflow controller
Then configure an ingress for argo-server to access the UI, as follows (I'm using Traefik here):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: argo-ui
  namespace: argo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`argowork-test.coolops.cn`)
      kind: Rule
      services:
        - name: argo-server
          port: 2746
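If you don't have an ingress controller, a quick alternative is to port-forward the argo-server service (a minimal sketch; the service name and port come from the quick-start manifests):
kubectl -n argo port-forward svc/argo-server 2746:2746
# then open localhost:2746 in a browser (https or http, depending on the server's TLS settings)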
The UI interface is as follows:
Configure a minio ingress as follows:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: minio
  namespace: argo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`minio-test.coolops.cn`)
      kind: Rule
      services:
        - name: minio
          port: 9000
The minio UI is as follows (the default username and password are admin:password):
Installing the client
Argo Workflows provides the Argo CLI, which is simple to install as follows:
# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.0.0-rc4/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
mv ./argo-linux-amd64 /usr/local/bin/argo
After the installation, run the following command to check whether the installation is successful.
# argo version
argo: v3.0.0-rc4
  BuildDate: 2021-03-02T21:42:55Z
  GitCommit: ae5587e97dad0e4806f7a230672b998fe140a767
  GitTreeState: clean
  GitTag: v3.0.0-rc4
  GoVersion: go1.13
  Compiler: gc
  Platform: linux/amd64
Its main commands are:
submit    Create a workflow
watch     Watch a workflow in real time
get       Show workflow details
delete    Delete a workflow
stop      Stop a workflow
More commands can be viewed with argo --help.
You can then write a simple hello-world Workflow like this:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  labels:
    workflows.argoproj.io/archive-strategy: "false"
spec:
  entrypoint: whalesay
  templates:
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [cowsay]
        args: ["hello world"]
Create and observe the Workflow using the following command.
$ argo submit -n argo helloworld.yaml --watch
You can then see the following output.
Name:                hello-world-9pw7v
Namespace:           argo
ServiceAccount:      default
Status:              Succeeded
Conditions:          Completed  True
Created:             Mon Mar 08 14:51:35 +0800 (10 seconds ago)
Started:             Mon Mar 08 14:51:35 +0800 (10 seconds ago)
Finished:            Mon Mar 08 14:51:45 +0800 (now)
Duration:            10 seconds
Progress:            1/1
ResourcesDuration:   4s*(1 cpu),4s*(100Mi memory)

STEP                  TEMPLATE  PODNAME            DURATION  MESSAGE
 hello-world-9pw7v    whalesay  hello-world-9pw7v  5s
You can also view the status via argo list, as follows:
# argo list -n argo
NAME STATUS AGE DURATION PRIORITY
hello-world-9pw7v Succeeded 1m 10s 0
Use argo logs to view the logs, as follows:
# argo logs -n argo hello-world-9pw7v
hello-world-9pw7v:  _____________
hello-world-9pw7v: < hello world >
hello-world-9pw7v:  -------------
hello-world-9pw7v:     \
hello-world-9pw7v:      \
hello-world-9pw7v:       \
hello-world-9pw7v:                     ##        .
hello-world-9pw7v:               ## ## ##       ==
hello-world-9pw7v:            ## ## ## ##      ===
hello-world-9pw7v:        /""""""""""""""""___/ ===
hello-world-9pw7v:   ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
hello-world-9pw7v:        \______ o          __/
hello-world-9pw7v:         \    \        __/
hello-world-9pw7v:           \____\______/
Core concepts
Workflow
Workflow is the most important resource in Argo and has two key functions:
-
It defines the workflow to be executed
-
It stores the state of the workflow
The Workflow to be executed is defined in the Workflow.spec field, which mainly includes templates and entrypoint, as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # name prefix of the Workflow
spec:
  entrypoint: whalesay          # the template to invoke first
  templates:
  - name: whalesay              # template definition
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
Templates
Templates are list structures that fall into two main categories:
- Define specific workflows
- Call other templates to provide parallel control
Define specific workflows
There are four categories for defining specific workflows, as follows:
- Container
- Script
- Resource
- Suspend
Container
Container is the most common template type. It schedules a container, and its spec is the same as the Kubernetes container spec, as follows:
- name: whalesay
  container:
    image: docker/whalesay
    command: [cowsay]
    args: ["hello world"]
Script
Script is another wrapper implementation of Container. It is defined in the same way as Container, except that the source field is added for custom scripts, as follows:
- name: gen-random-int
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import random
      i = random.randint(1, 100)
      print(i)
The output of the script is automatically exported to {{tasks.<NAME>.outputs.result}} or {{steps.<NAME>.outputs.result}}, depending on how it is called.
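For example, a steps template could consume the result of the gen-random-int script above like this (a minimal sketch; the print-int template is an assumed template that prints its message parameter):
- name: use-random-int
  steps:
  - - name: generate
      template: gen-random-int
  - - name: print
      template: print-int                 # assumed template, not defined in this article
      arguments:
        parameters:
        - name: message
          value: "{{steps.generate.outputs.result}}"   # the script's stdout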
Resource
Resource performs operations on cluster resources directly, supporting get, create, apply, delete, replace, and patch. For example, create a ConfigMap in the cluster as follows:
- name: k8s-owner-reference
  resource:
    action: create
    manifest: |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        generateName: owned-eg-
      data:
        some: value
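The resource template also supports successCondition and failureCondition fields, which tell Argo when to treat the created resource as finished. A hedged sketch for a Kubernetes Job (the Job itself is just an illustration, not from this article):
- name: create-job
  resource:
    action: create
    # consider the step successful once the Job reports a succeeded pod
    successCondition: status.succeeded > 0
    failureCondition: status.failed > 3
    manifest: |
      apiVersion: batch/v1
      kind: Job
      metadata:
        generateName: pi-job-
      spec:
        template:
          spec:
            containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
            restartPolicy: Never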
Suspend
Suspend pauses execution, either for a given duration or until it is manually resumed with argo resume. It is defined as follows:
- name: delay
  suspend:
    duration: "20s"
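If duration is omitted, the workflow stays suspended until it is resumed manually, for example (the workflow name is a placeholder):
argo resume -n argo <workflow-name>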
Call other templates to provide parallel control
Calls to other templates also fall into two categories:
- Steps
- DAG
Steps
Steps defines tasks as a series of steps. Its structure is a "list of lists": the outer list runs sequentially and the inner lists run in parallel. As follows:
- name: hello-hello-hello
  steps:
  - - name: step1
      template: prepare-data
  - - name: step2a
      template: run-data-first-half
    - name: step2b
      template: run-data-second-half
step1 and step2a run sequentially, while step2a and step2b run in parallel.
You can also use when for conditional execution, as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: coinflip-
spec:
  entrypoint: coinflip
  templates:
  - name: coinflip
    steps:
    - - name: flip-coin
        template: flip-coin
    - - name: heads
        template: heads
        when: "{{steps.flip-coin.outputs.result}} == heads"
      - name: tails
        template: tails
        when: "{{steps.flip-coin.outputs.result}} == tails"
  - name: flip-coin
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        result = "heads" if random.randint(0, 1) == 0 else "tails"
        print(result)
  - name: heads
    container:
      image: alpine:3.6
      command: [sh, -c]
      args: ["echo \"it was heads\""]
  - name: tails
    container:
      image: alpine:3.6
      command: [sh, -c]
      args: ["echo \"it was tails\""]
If you submit this Workflow, it looks like this:
In addition to using when for conditional execution, you can also loop, as shown in the following code:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loops-
spec:
  entrypoint: loop-example
  templates:
  - name: loop-example
    steps:
    - - name: print-message
        template: whalesay
        arguments:
          parameters:
          - name: message
            value: "{{item}}"
        withItems:
        - hello world
        - goodbye world
  - name: whalesay
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
If you submit the Workflow, the output is as follows:
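Besides withItems, loops can also iterate over a JSON list produced at runtime, using withParam. A minimal sketch, assuming a gen-list script template (not defined in this article) whose stdout is a JSON array such as ["a", "b", "c"]:
- name: loop-param-example
  steps:
  - - name: gen-list
      template: gen-list                  # assumed script template printing a JSON array
  - - name: print-message
      template: whalesay
      arguments:
        parameters:
        - name: message
          value: "{{item}}"
      withParam: "{{steps.gen-list.outputs.result}}"   # iterate over the generated list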
DAG
DAG is mainly used to define task dependencies: a task starts only after the tasks it depends on have completed, and tasks without any dependencies run immediately. As follows:
- name: diamond
  dag:
    tasks:
    - name: A
      template: echo
    - name: B
      dependencies: [A]
      template: echo
    - name: C
      dependencies: [A]
      template: echo
    - name: D
      dependencies: [B, C]
      template: echo
Here A runs immediately, B and C depend on A, and D depends on B and C.
Then run an example to see what it looks like:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
      - name: D
        dependencies: [B, C]
        template: echo
        arguments:
          parameters: [{name: message, value: D}]
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]
Submit the workflow:
argo submit -n argo dag.yaml --watch
Variables
Argo Workflow allows you to use variables as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-parameters-
spec:
  entrypoint: whalesay
  arguments:
    parameters:
    - name: message
      value: hello world
  templates:
  - name: whalesay
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
The arguments field under spec defines the variable message with the value hello world. The template then declares a matching inputs parameter and references it with the "{{ }}" syntax.
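Parameters defined under spec.arguments can also be overridden at submit time with the -p flag, for example (the file name is assumed):
argo submit -n argo hello-world-parameters.yaml -p message="goodbye world"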
Variables can also perform some functional operations, mainly:
- filter: filter a list
- asInt: convert to an integer
- asFloat: convert to a float
- string: convert to a string
- toJson: convert to JSON
Example:
filter([1, 2], { # > 1})
asInt(inputs.parameters["my-int-param"])
asFloat(inputs.parameters["my-float-param"])
string(1)
toJson([1, 2])
More syntax can be found at github.com/antonmedv/e…
Artifact repository
When you installed Argo, minio was already installed as the artifact repository. So how do you use it?
Here’s an official example:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    - - name: generate-artifact
        template: whalesay
    - - name: consume-artifact
        template: print-message
        arguments:
          artifacts:
          - name: message
            from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["sleep 1; cowsay hello world | tee /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: hello-art
        path: /tmp/hello_world.txt
  - name: print-message
    inputs:
      artifacts:
      - name: message
        path: /tmp/message
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/message"]
It is divided into two steps:
- First, produce the artifact
- Then, consume the artifact
Submit the Workflow and the result is as follows:
Then in minio you can see the generated artifact, stored as a compressed archive:
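The quick-start manifests already point Argo at the bundled minio. If you want a different S3-compatible bucket as the default artifact repository, it is configured in the workflow-controller-configmap; a sketch with assumed bucket, endpoint, and secret names:
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      bucket: my-bucket                  # assumed bucket name
      endpoint: minio:9000               # assumed S3-compatible endpoint
      insecure: true
      accessKeySecret:
        name: my-minio-cred              # assumed secret holding the credentials
        key: accesskey
      secretKeySecret:
        name: my-minio-cred
        key: secretkey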
WorkflowTemplate
A WorkflowTemplate is a reusable Workflow definition that lives in the cluster; its templates can be referenced from within the WorkflowTemplate itself or from other Workflows and WorkflowTemplates in the cluster.
WorkflowTemplate vs. template
- A template is a single task under the templates field of a Workflow; every Workflow must define at least one template.
- A WorkflowTemplate is a Workflow definition that lives in the cluster. Because it contains templates, those templates can be referenced from within the WorkflowTemplate itself or from other Workflows and WorkflowTemplates in the cluster.
Since version 2.7, a WorkflowTemplate spec is defined exactly like a Workflow spec, so you can simply change kind: Workflow to kind: WorkflowTemplate. For example:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflow-template-1
spec:
  entrypoint: whalesay-template
  arguments:
    parameters:
    - name: message
      value: hello world
  templates:
  - name: whalesay-template
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
Create the WorkflowTemplate as follows:
argo template create workflowtemplate.yaml
Then reference it in Workflow as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    steps:
    - - name: call-whalesay-template
        templateRef:                     # reference the whalesay-template in workflow-template-1
          name: workflow-template-1
          template: whalesay-template
        arguments:
          parameters:
          - name: message
            value: "hello world"
ClusterWorkflowTemplate
A ClusterWorkflowTemplate is a cluster-scoped WorkflowTemplate that can be referenced by workflows in any namespace.
Define a ClusterWorkflowTemplate as follows:
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: cluster-workflow-template-whalesay-template
spec:
  templates:
  - name: whalesay-template
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]
Then reference it using templateRef in Workflow as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    steps:
    - - name: call-whalesay-template
        templateRef:                     # reference the ClusterWorkflowTemplate
          name: cluster-workflow-template-whalesay-template
          template: whalesay-template
          clusterScope: true             # mark the reference as cluster-scoped
        arguments:
          parameters:
          - name: message
            value: "hello world"
Practice
The basics of Argo are briefly described above; more can be learned from the official documentation.
Let’s use a simple CI/CD practice to see what you can do with Argo Workflow.
The entire CI/CD process is simple: pull code -> compile -> build image -> upload image -> deploy.
Define a WorkflowTemplate as follows:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: devops-java
  annotations:
    workflows.argoproj.io/description: |
      Checkout out from Git, build and deploy application.
    workflows.argoproj.io/maintainer: '@joker'
    workflows.argoproj.io/tags: java, git
    workflows.argoproj.io/version: '>= 2.9.0'
spec:
  entrypoint: main
  arguments:
    parameters:
    - name: repo
      value: gitlab-test.coolops.cn/root/springboot-helloworld.git
    - name: branch
      value: master
    - name: image
      value: registry.cn-hangzhou.aliyuncs.com/rookieops/myapp:202103101613
    - name: cache-image
      value: registry.cn-hangzhou.aliyuncs.com/rookieops/myapp
    - name: dockerfile
      value: Dockerfile
    - name: devops-cd-repo
      value: gitlab-test.coolops.cn/root/devops-cd.git
    - name: gitlabUsername
      value: devops
    - name: gitlabPassword
      value: devops123456
  templates:
  - name: main
    steps:
    - - name: Checkout
        template: Checkout
    - - name: Build
        template: Build
    - - name: BuildImage
        template: BuildImage
    - - name: Deploy
        template: Deploy
  # Checkout: pull the code
  - name: Checkout
    script:
      image: registry.cn-hangzhou.aliyuncs.com/rookieops/maven:3.5.0-alpine
      workingDir: /work
      command:
      - sh
      source: |
        git clone --branch {{workflow.parameters.branch}} http://{{workflow.parameters.gitlabUsername}}:{{workflow.parameters.gitlabPassword}}@{{workflow.parameters.repo}} .
      volumeMounts:
      - mountPath: /work
        name: work
  # Build: compile and package
  - name: Build
    script:
      image: registry.cn-hangzhou.aliyuncs.com/rookieops/maven:3.5.0-alpine
      workingDir: /work
      command:
      - sh
      source: |
        mvn -B clean package -Dmaven.test.skip=true -Dautoconfig.skip
      volumeMounts:
      - mountPath: /work
        name: work
  # BuildImage: build and push the image with Kaniko
  - name: BuildImage
    volumes:
    - name: docker-config
      secret:
        secretName: docker-config
    container:
      image: registry.cn-hangzhou.aliyuncs.com/rookieops/kaniko-executor:v1.5.0
      workingDir: /work
      args:
      - --context=.
      - --dockerfile={{workflow.parameters.dockerfile}}
      - --destination={{workflow.parameters.image}}
      - --skip-tls-verify
      - --reproducible
      - --cache=true
      - --cache-repo={{workflow.parameters.cache-image}}
      volumeMounts:
      - mountPath: /work
        name: work
      - mountPath: /kaniko/.docker/
        name: docker-config
  # Deploy: update the deployment repo with kustomize
  - name: Deploy
    script:
      image: registry.cn-hangzhou.aliyuncs.com/rookieops/kustomize:v3.8.1
      workingDir: /work
      command:
      - sh
      source: |
        git remote set-url origin http://{{workflow.parameters.gitlabUsername}}:{{workflow.parameters.gitlabPassword}}@{{workflow.parameters.devops-cd-repo}}
        git config --global user.name "Administrator"
        git config --global user.email "[email protected]"
        git clone http://{{workflow.parameters.gitlabUsername}}:{{workflow.parameters.gitlabPassword}}@{{workflow.parameters.devops-cd-repo}} /work/devops-cd
        cd /work/devops-cd
        git pull
        cd /work/devops-cd/devops-simple-java
        kustomize edit set image {{workflow.parameters.image}}
        git commit -am 'image update'
        git push origin master
      volumeMounts:
      - mountPath: /work
        name: work
  volumeClaimTemplates:
  - metadata:
      name: work
    spec:
      storageClassName: nfs-client-storageclass
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Description:
1. Kaniko is used to build the image, so there is no need to mount docker.sock; however, a config.json secret is required to push the image:
kubectl create secret generic docker-config --from-file=.docker/config.json -n argo
2. Prepare a storageClass. You can also use an emptyDir volume instead, but a persistent volume lets you keep cache files and speed up builds (I didn't do that above). Then create the WorkflowTemplate:
argo template create -n argo devops-java.yaml
After the WorkflowTemplate is created, you can see it in the UI as follows:
Create the Workflow. You can create it manually as follows:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: workflow-template-devops-java-
spec:
  workflowTemplateRef:
    name: devops-java
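You can then submit and check it from the CLI just like the earlier examples (the file name is assumed; the generated workflow name will differ):
argo submit -n argo workflow.yaml --watch
argo list -n argo
argo get -n argo <workflow-name>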
Or you can create it from the UI, which is what I do here: select the WorkflowTemplate you just created and click Create. A Workflow is generated; click into it to see the details of each step, and click on a specific step to see its log. You can also view the Workflow execution result on the CLI, as shown above with argo list and argo get. This concludes the first walkthrough; it will be gradually optimized later.
Reference documentation
- github.com/argoproj/ar…
- argoproj.github.io/argo-workfl…
- github.com/antonmedv/e…
- github.com/argoproj/ar…
Public account: Operation and maintenance development story
GitHub: github.com/orgs/sunsha…
Love life, love operation
If you think the article is good, please share it with your friends or forward it to your Moments. Your support and encouragement is my biggest motivation. Please follow me if you like it.