In the previous two articles we set up a small but complete K8s cluster environment and gained a basic understanding of the related concepts and components (an initial impression only, because deep understanding comes from practice; as the saying goes, "what you learn on paper always feels shallow, true understanding comes from doing"). This article takes a practical angle and shows how to combine the commonly used Gitlab and Jenkins to automate project deployment to K8s. The examples include a server-side project based on Spring Boot and a Web project based on Vue.js.
The tools and techniques covered in this article include:
- Gitlab – a commonly used source code management system
- Jenkins, Jenkins Pipeline – a commonly used automated build and deployment tool; a Pipeline organizes the build and deployment steps as a pipeline
- Docker, Dockerfile – the container engine; all applications ultimately run in Docker containers, and a Dockerfile is the image definition file
- Kubernetes – Google's open-source container orchestration and management system
- Helm – the Kubernetes package manager. Similar to yum or apt on Linux, or npm for Node, it organizes Kubernetes applications and their dependent services as packages (Charts) for release and management
Environmental Background:
- Gitlab is already used for source code management, with branches set up per environment: develop (development environment), pre-release (test environment), and master (production environment)
- Jenkins service has been set up
- An existing Docker Registry service for storing Docker images (self-built with Docker Registry or Harbor, or a cloud service; this article uses the Alibaba Cloud container image service)
- A K8s cluster has been set up
Expected effect:
- Applications are deployed per environment: development, test, and production run in different namespaces of the same cluster, or in different clusters (for example, development and test in different namespaces of a local cluster, and production in a cloud cluster)
- The configuration should be as generic as possible, so that automated deployment of a new project only requires changing a few properties in a few configuration files
- For the development and test environments, pushing code automatically triggers build and deployment; for production, adding a version tag to the master branch and pushing the tag automatically triggers deployment
- The overall interaction process is shown below
Project configuration
First we need to add some necessary configuration files to the root path of the project, as shown below
Include:
- Dockerfile is a file used to build a Docker image (see Docker notes (11) : Dockerfile details and best practices)
- Helm configuration files. Helm is the Kubernetes package manager; it packages an application's Deployment, Service, Ingress, and so on for release and management (more on Helm below)
- The Jenkinsfile, Jenkins’ Pipeline definition file, defines the tasks to be performed at each stage
Dockerfile
Add a Dockerfile to the root directory of the project to define how the Docker image is built. Take the Spring Boot project as an example:
```dockerfile
FROM frolvlad/alpine-java:jdk8-slim

# can be overridden at build time with --build-arg profile=xxx
ARG profile
ENV SPRING_PROFILES_ACTIVE=${profile}

# project port
EXPOSE 8000

WORKDIR /mnt

# set the time zone (using the USTC apk mirror to install tzdata)
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.ustc.edu.cn/g' /etc/apk/repositories \
    && apk add --no-cache tzdata \
    && ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo "Asia/Shanghai" > /etc/timezone \
    && apk del tzdata \
    && rm -rf /var/cache/apk/* /tmp/* /var/tmp/* $HOME/.cache

COPY ./target/your-project-name-1.0-SNAPSHOT.jar /mnt/app.jar

ENTRYPOINT ["java", "-jar", "/mnt/app.jar"]
```
The Dockerfile declares a `profile` build argument (ARG) and uses it to set SPRING_PROFILES_ACTIVE, so that passing `--build-arg profile=xxx` at build time produces an image for the corresponding environment. SPRING_PROFILES_ACTIVE could instead be set when the container starts, with `docker run -e SPRING_PROFILES_ACTIVE=xxx`, but here it is fixed at image build time through the ARG.
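As a quick illustration (the image name and tag below are placeholders, not from the original project), building and running the image for the test environment might look like this:

```bash
# Build an image for the test environment; the profile build argument
# becomes SPRING_PROFILES_ACTIVE inside the image
docker build --build-arg profile=test -t demo/your-project:test.1 .

# Alternatively, override the profile when the container starts
docker run -d -p 8000:8000 -e SPRING_PROFILES_ACTIVE=test demo/your-project:test.1
```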
Helm Configuration File
Helm is a package management tool for Kubernetes. It packages the Deployment, Service, Ingress, and other resources related to an application's deployment for distribution and management (a chart can be stored in a repository, much like a Docker image). The Helm configuration files, as shown in the figure above, include:
```
helm
├── templates                 - template files
│   ├── deployment.yaml       - Deployment configuration template
│   ├── ingress.yaml          - Ingress configuration template
│   ├── NOTES.txt             - chart help text, printed after a successful helm install
│   └── service.yaml          - Service configuration template, the access abstraction over Pods (e.g. NodePort, ClusterIP)
├── values.yaml               - parameter file for the chart; the template files reference these values
├── Chart.yaml                - chart definition: chart name, version, and other metadata
└── charts                    - directory for dependent sub-charts; there are usually no dependencies, so it is removed here
```
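If you are setting this up for a new project, the layout above is essentially what the Helm CLI generates by default, so you can scaffold it and then trim it down. A minimal sketch, assuming Helm 3 and an example chart name of `demo`:

```bash
# Generate a default chart scaffold named "demo" (creates Chart.yaml, values.yaml, templates/, charts/)
helm create demo

# Remove the parts not needed here, e.g. the empty dependencies directory
rm -rf demo/charts
```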
We can define the Chart name (similar to an installation package name) for each project in Chart.yaml, for example:
```yaml
apiVersion: v2
name: your-chart-name
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: 1.16.0
```
In values.yaml, define the variables needed in the template file, such as
```yaml
# number of Pod replicas for the Deployment, i.e. how many containers to run
replicaCount: 1

# container image configuration
image:
  repository: registry.cn-hangzhou.aliyuncs.com/demo/demo
  pullPolicy: Always
  # Overrides the image tag whose default is the chart version.
  tag: "dev"

# credentials for pulling images from the private registry
imagePullSecrets:
  - name: aliyun-registry-secret
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

# container configuration added on top of the default scaffold: container port, env, etc.
container:
  port: 8000

# Service configuration; the default ClusterIP type is changed to NodePort
service:
  type: NodePort
  port: 8000

# Ingress configuration for external access; the hosts need to be configured
ingress:
  enabled: true
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: demo.com
      paths: ["/demo"]
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

# ... other default parameter configurations omitted
```
Here, on top of the default generated values.yaml, we add a container section so that the container port can be specified without changing the template files (keeping the templates generic across projects, so they usually do not need to change), and an env configuration can be added to pass environment variables into the container at Helm deployment time. We also change the Service type from the default ClusterIP to NodePort. When another project of the same type is deployed, only a few configuration items in Chart.yaml and values.yaml need to be adjusted for that project; the template files in the templates directory can be reused as they are.
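Before wiring the chart into the pipeline, it can be useful to check that it renders correctly with project-specific overrides. A sketch, where the release name, namespace, tag, and host are only illustrative:

```bash
# Validate the chart structure and templates
helm lint ./helm

# Render the templates with project-specific overrides without installing anything
helm install my-demo ./helm \
  --namespace develop \
  --set image.tag=dev.88f5822 \
  --set "ingress.hosts[0].host=dev.your-site.com" \
  --dry-run --debug
```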
When deploying, the K8s environment needs to pull the image from the Docker image repository, so an image repository access credential (imagePullSecrets) must be created in K8s:
```bash
# Logging in to the Docker Registry generates /root/.docker/config.json
sudo docker login --username=your-username registry.cn-shenzhen.aliyuncs.com

# Create the secret in the develop namespace
kubectl create secret generic aliyun-registry-secret \
    --from-file=.dockerconfigjson=/root/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson \
    --namespace=develop
```
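To confirm the secret exists in the expected namespace (an optional check, not part of the original steps):

```bash
kubectl get secret aliyun-registry-secret --namespace=develop
```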
Jenkinsfile
The Jenkinsfile is the Jenkins Pipeline configuration file and follows Groovy syntax. For building and deploying the Spring Boot project, the Jenkinsfile is written as follows:
Image_tag = "default" Pipeline {agent any environment {GIT_REPO = "${env.gitlabSourcereponame}" // From Jenkins GIT_BRANCH = "${env.gitlabtargetBranch}" GIT_TAG = sh(returnStdout: true,script: 'Git describe --tags --always').trim() // Commit ID or tag names DOCKER_REGISTER_CREDS = credentials('aliyun-docker-repo-creds') // Docker registry credential KUBE_CONFIG_LOCAL = credentials('local-k8s-kube-config') The credentials('prod-k8s-kube-config') in the production environment DOCKER_REGISTRY = "registry.cn-hangzhou.aliyuncs.com DOCKER_NAMESPACE = "your-namespace" // namespace DOCKER_IMAGE = "${DOCKER_REGISTRY}/${DOCKER_NAMESPACE}/${GIT_REPO}" INGRESS_HOST_DEV = "dev.your-site.com" // Domain name of the development environment INGRESS_HOST_TEST = "test.your-site.com" // Domain name of the test environment INGRESS_HOST_PROD = "prod.your-site.com" // Domain name of production environment} parameters {string(name: 'ingress_path', defaultValue: '/your-path', description: 'service context path') string(name: 'replica_count', defaultValue: '1', description: 'C ')} stages {stage('Code Analyze') {agent any steps {echo "1. "}} stage(' Build') {agent {docker {image 'Maven: 3-JDK-8-alpine 'args '-v $HOME/.m2:/root/.m2'}} steps { echo "2. Encoding =UTF-8 -dskiptests =true'}} stage('Docker Build') {agent any steps {echo "sh 'MVN clean package-dfile. encoding=UTF-8 -dskiptests =true'}} stage('Docker Build') {agent any steps {echo "3. Build Docker image "echo" ${DOCKER_IMAGE}" sudo Docker login -u ${DOCKER_REGISTER_CREDS_USR} -p ${DOCKER_REGISTER_CREDS_PSW} ${DOCKER_REGISTRY}" script { def profile = "dev" if (env.gitlabTargetBranch == "develop") { image_tag = "dev." + env.GIT_TAG } else if (env.gitlabTargetBranch == "pre-release") { image_tag = "test." + env.GIT_TAG profile = "test" } Else if (env.gitlabTargetBranch == "master"){image_tag = env.git_tag profile = "prod"} // Set profile with --build-arg, Sh "docker build --build-arg profile=${profile} -t ${DOCKER_IMAGE}:${image_tag} ${DOCKER_IMAGE}:${image_tag}" sh "docker rmi ${DOCKER_IMAGE}:${image_tag}" } } } stage('Helm Deploy') { agent { docker { image 'lwolf/helm-kubectl-docker' args '-u root:root' } } steps { echo "4. Kube "script {def kube_config = env.kube_config_local def ingress_host = env.INGRESS_HOST_DEV if (env.gitlabTargetBranch == "pre-release") { ingress_host = env.INGRESS_HOST_TEST } else if (env.gitlabTargetBranch == "master"){ ingress_host = env.INGRESS_HOST_PROD kube_config = env.KUBE_CONFIG_PROD } sh "echo ${kube_config} | base64 - > d/root /. Kube/config "/ / according to the different environment will service deployment to a different namespace, Sh "helm upgrade -i --namespace=${env.gitlabtargetBranch} --set replicaCount=${params.replica_count} --set image.repository=${DOCKER_IMAGE} --set image.tag=${image_tag} --set nameOverride=${GIT_REPO} --set ingress.hosts[0].host=${ingress_host} --set ingress.hosts[0].paths={${params.ingress_path}} ${GIT_REPO} ./helm/" } } } } }Copy the code
Jenkinsfile defines the entire process for automated build deployment:
- Code Analyze: static code analysis could be performed here with a tool such as SonarQube; it is skipped in this article
- Maven Build: starts a Maven Docker container to build and package the project; the host's local Maven repository directory is mounted into the container so that dependencies do not have to be downloaded again on every build
- Docker Build: builds the Docker image and pushes it to the image repository. Images for different environments are distinguished by tag: the development environment uses dev.commitId (for example dev.88f5822), the test environment uses test.commitId, and the production environment sets the webhook to tag push events and uses the tag name directly (see the tagging example after this list)
- Helm Deploy: uses Helm to deploy a new project or upgrade an existing one, with per-environment parameters such as the access domain name and the kube config credential of the target K8s cluster
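For example, a production release on the master branch would be triggered by pushing a version tag (the tag name here is only illustrative):

```bash
git checkout master
git tag v1.0.0
git push origin v1.0.0   # the "Tag Push Events" webhook triggers the production pipeline
```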
Jenkins configuration
Jenkins Task Configuration
Create a pipeline task in Jenkins, as shown
Configure the build trigger, set the target branch to develop, and generate a token, as shown in the figure
Note down the “GitLab Webhook URL” and the token value here for use in the GitLab configuration.
Configure the Pipeline: select "Pipeline script from SCM" to obtain the Pipeline script file from the project source code, and configure the project's Git address and the credentials used to pull the source code, as shown in the figure
Save the Jenkins configuration of the project development environment. The test environment only needs to change the corresponding branch to pre-release
Jenkins credential configuration
In the Jenkinsfile, we use two access credentials: the Docker Registry credentials and the kube config credentials for the local K8s cluster,
```groovy
DOCKER_REGISTER_CREDS = credentials('aliyun-docker-repo-creds')  // Docker registry credentials
KUBE_CONFIG_LOCAL = credentials('local-k8s-kube-config')         // kube config credentials for the development/test environment
```
These two credentials need to be created in Jenkins.
Add the Docker Registry login credentials. On the Jenkins Credentials page, add a credential of user name and password type, as shown in the figure
To add the K8s cluster access credential, base64-encode the content of the /root/.kube/config file on the master node:
```bash
base64 /root/.kube/config > kube-config-base64.txt
cat kube-config-base64.txt
```
Create a credential of type Secret Text in Jenkins using the encoded content, as shown in the figure below
Enter the base64 encoded content in the Secret text box.
Gitlab configuration
On the Gitlab project's Settings > Integrations page, configure a webhook: fill the "GitLab Webhook URL" and token value from the Jenkins trigger configuration into the URL and Secret Token fields, and select "Push Events" as the trigger, as shown
With "Push Events" selected for the development and test environments, the Jenkins Pipeline task of the corresponding environment is triggered and an automated build runs whenever a developer pushes or merges code into the develop or pre-release branch. For production, select "Tag Push Events" so that an automated build is triggered when a tag is pushed to the master branch. The figure shows the Pipeline build view.
Conclusion
This article has shown how to use Gitlab + Jenkins Pipeline + Docker + Kubernetes + Helm to automate the deployment of a Spring Boot project. With minor modifications, the setup can be applied to other Spring Boot-based projects (the detailed modifications are described in the README file of the source code).
All configuration files involved in this article (including the server project based on Spring Boot and the Web project based on vue.js) can be obtained in the source project (source address: follow the public account “Half Way Yuge”, enter “k8sops” on the home page).
Original address: blog.jboost.cn/k8s3-cd.htm…
Author: Yuge. You are welcome to follow the author's WeChat public account "Halfway Yuge" to learn and grow together.