This article shows how to build a Java CI pipeline using Argo Workflows and records the pitfalls encountered along the way.
Argo-based CI = GitLab + Webhook + Argo Events + Argo Workflows
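As a rough illustration of how these pieces connect, here is a minimal sketch of an Argo Events webhook EventSource plus a Sensor that submits a workflow whenever GitLab fires a push webhook. The names (gitlab-webhook, ci-push, java-ci), the port, and the WorkflowTemplate are hypothetical placeholders, and the exact trigger fields may vary with your Argo Events version; you also need to expose the EventSource service and point the GitLab webhook at it:

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: gitlab-webhook
spec:
  webhook:
    ci-push:                      # event name, referenced by the Sensor below
      port: "12000"
      endpoint: /push
      method: POST
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: ci-sensor
spec:
  dependencies:
  - name: push
    eventSourceName: gitlab-webhook
    eventName: ci-push
  triggers:
  - template:
      name: run-ci
      argoWorkflow:
        operation: submit         # submit a new Workflow for each push event
        source:
          resource:
            apiVersion: argoproj.io/v1alpha1
            kind: Workflow
            metadata:
              generateName: java-ci-
            spec:
              workflowTemplateRef:
                name: java-ci     # hypothetical WorkflowTemplate holding the pipeline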
If you want to build a platform-level pipeline on top of Argo, you can call Argo's API directly for maximum flexibility. Please refer to the official documentation: github.com/argoproj/ar…
Introduction to Argo Workflows
Argo Workflows is an open source, container-native workflow engine, implemented as a Kubernetes CRD, that orchestrates parallel jobs on Kubernetes.
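For readers new to Argo Workflows, the canonical hello-world Workflow from the official docs shows the basic shape of the CRD:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-    # the controller appends a random suffix
spec:
  entrypoint: whalesay          # the template to run first
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]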
Argo Workflows installation
github.com/argoproj/ar…
Just download the manifests and deploy them with Argo CD.
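If you are not using Argo CD, the release manifests can also be applied directly with kubectl (the version tag below is just an example; pick the release you need):

kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.4.4/install.yaml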
The CI process
References:
- CI/CD with Argo on Kubernetes
- Implemented using Argo on Kubernetes
Build a Java CI pipeline using Argo Workflows
Each container in the pipeline shares a common storage volume, making it easy for intermediate artifacts to flow between steps.
VolumeClaimTemplates is a list of claims that containers are allowed to reference. The Workflow controller will create the claims at the beginning of the workflow and delete the claims upon completion of the workflow
Please refer to the official documentation: argoproj.github.io/argo-workfl…
Here I added the StorageClass that I had configured.
volumeClaimTemplates:
- metadata:
    name: work
    annotations:
      volume.beta.kubernetes.io/storage-class: "tmp-nfs-client-storageclass"
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 1Gi
Clone the code repository and the configuration repository
Add the credentials to the request URL:
https://[username]:[password]@gitee.com/project.git
Writing the password in plain text is clearly not safe.
GitLab can generate an access token for this purpose.
Then run the following command to access it
git clone https://[username]:[token]@xxxxxx.com/MyUser/MyRepo.git
The code-repo parameter is configured as https://[username]:[token]@xxxxxx.com/MyUser/MyRepo.git
- name: clone-code
  inputs:
    parameters:
    - name: code-repo
    - name: code-branch
  container:
    image: alpine/git:v2.26.2
    volumeMounts:
    - mountPath: /work
      name: work
    workingDir: /work
    # Do a shallow clone, which is the fastest way to clone, by using the
    # --depth, --branch, and --single-branch options
    args:
    - clone
    # - --depth
    # - "1"
    - --branch
    - "{{inputs.parameters.code-branch}}"
    - --single-branch
    - "{{inputs.parameters.code-repo}}"
    - .
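For context, the clone-code template above would be invoked from the pipeline's entrypoint with the repository URL and branch passed as arguments. A minimal sketch, where the template name ci-pipeline and the parameter values are hypothetical:

- name: ci-pipeline
  steps:
  - - name: clone
      template: clone-code
      arguments:
        parameters:
        - name: code-repo
          value: "https://[username]:[token]@xxxxxx.com/MyUser/MyRepo.git"
        - name: code-branch
          value: "master"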
If the code repository or image registry is addressed by a domain name, you can add hostAliases under the spec to resolve it:
hostAliases:
- ip: "10.114.61.24"
  hostnames:
  - "git.xxxxxx.com"
- ip: "10.120.43.49"
  hostnames:
  - "xxxxxx.xxxxxx.com"
Maven build
When compiling with Maven, a local repository is configured via settings.xml to speed up the build. settings.xml could be committed to each code repository, but that is not generic: you would have to add a copy to every repository. A better approach is to keep settings.xml in a separate configuration repository, add a step that clones that configuration repository, and mount the result into the build container; a sketch of such a step follows below.
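A minimal sketch of that clone-config step, assuming the config repository URL is passed in as a parameter and a config volume is declared alongside work (both the parameter name and the volume are assumptions; the build step later mounts the same config volume at /config):

- name: clone-config
  inputs:
    parameters:
    - name: config-repo      # hypothetical parameter: repository holding settings.xml
  container:
    image: alpine/git:v2.26.2
    volumeMounts:
    - mountPath: /config     # the build step mounts this same volume at /config
      name: config
    workingDir: /config
    args:
    - clone
    - --depth
    - "1"
    - "{{inputs.parameters.config-repo}}"
    - .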
First we define a PVC called workflow-build-pv-claim, and then declare the volume maven-repo in the spec.
volumes:
- name: maven-repo
  persistentVolumeClaim:
    claimName: workflow-build-pv-claim
workflow-build-pv-claim is a PersistentVolumeClaim I configured for caching third-party packages. The underlying storage uses NFS.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: workflow-build-pv-claim
  namespace: argo
spec:
  storageClassName: nfs-storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
Then mount the maven-repo volume to a path in the build container, such as /maven-repo.
- name: build
  inputs:
    parameters:
    - name: code-path
  container:
    image: maven:3-alpine
    volumeMounts:
    - mountPath: /work
      name: work
    - mountPath: /config
      name: config
    - mountPath: /maven-repo
      name: maven-repo
    workingDir: /work/
    command:
    - mvn
    args:
    - --settings=/config/settings.xml
    - -B
    - -DskipTests
    - clean
    - deploy
In settings.xml, set localRepository to /maven-repo:
<localRepository>/maven-repo</localRepository>
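For completeness, a minimal settings.xml along these lines would work; the mirror section is optional, and the Nexus URL is a made-up placeholder for an internal repository manager:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <localRepository>/maven-repo</localRepository>
  <mirrors>
    <mirror>
      <id>internal</id>
      <mirrorOf>central</mirrorOf>
      <url>https://nexus.example.com/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>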
Building the image with Kaniko
Kaniko is a tool built by Google for building Docker images on Kubernetes without privileged mode.
Kaniko does not rely on the Docker daemon; it executes every command in the Dockerfile entirely in userspace. This makes it possible to build container images in environments that cannot run privileged containers or a Docker daemon (such as a Kubernetes cluster).
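For reference, a typical Dockerfile for the jar produced by the Maven step might look like the following; the base image and jar path are assumptions about your project layout:

FROM openjdk:8-jre-alpine
# copy the artifact produced by the Maven build step
COPY target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]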
We need to push the built image to an image registry; mine is Harbor.
You can create a robot account and then create the secret using the following command:
kubectl create secret docker-registry kaniko-secret --docker-server="https://xxxxxxxxxxxx.com" --docker-username='robot$xxxxxxxxxxxx' --docker-password='hghghjhoigtfgiohgtyfuyuhugftybhyuftuy'
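To double-check that the secret holds the expected Docker config (assuming it was created in the argo namespace):

kubectl get secret kaniko-secret -n argo -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d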
Then define the kaniko-secret volume under the spec:
volumes:
- name: kaniko-secret
  secret:
    secretName: kaniko-secret
    items:
    - key: .dockerconfigjson
      path: config.json
Then mount the secret volume into the container at /kaniko/.docker:
- name: image
  inputs:
    parameters:
    - name: path
    - name: image
    - name: dockerfile
    - name: cache-image
  container:
    image: daocloud.io/gcr-mirror/kaniko-project-executor:debug
    volumeMounts:
    - name: kaniko-secret
      mountPath: /kaniko/.docker
    - name: work
      mountPath: /work
    workingDir: /work
    args: ["--dockerfile={{inputs.parameters.dockerfile}}",
           "--context=/work",
           "--insecure=false",
           "--skip-tls-verify=false",
           "--destination={{inputs.parameters.image}}"]
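One note: the cache-image input parameter declared above is never referenced in the args. If you want layer caching, Kaniko supports it via the --cache and --cache-repo flags, which could be wired up like this (a sketch, not part of the original step):

args: ["--dockerfile={{inputs.parameters.dockerfile}}",
       "--context=/work",
       "--cache=true",
       "--cache-repo={{inputs.parameters.cache-image}}",
       "--destination={{inputs.parameters.image}}"]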