Introduction to Jenkins Pipeline

There are several ways to implement builds in Jenkins; here we use the most common one: Pipeline. Simply put, a Pipeline is a workflow framework running on Jenkins. It chains together tasks that would otherwise run independently on one or more nodes, enabling complex process orchestration and visualization that a single task could hardly achieve.

Jenkins Pipeline has several core concepts:

  • Node: a Jenkins node, either the Master or an Agent; it is the concrete environment in which a Step executes. For example, the Jenkins Slave we launched dynamically earlier is a Node
  • Stage: a Pipeline can be divided into several Stages, each representing a group of operations such as Build, Test, or Deploy. A Stage is a logical grouping and can span multiple Nodes
  • Step: the most basic unit of operation, anything from printing a line to building a Docker image, provided by various Jenkins plug-ins. For example, the command sh 'make' is equivalent to running make in a shell terminal

So how do we create a Jenkins Pipeline?

  • Pipeline scripts are written in Groovy, but there is no need to learn Groovy separately to get started
  • Pipelines support two syntaxes: Declarative and Scripted
  • There are also two ways to create a Pipeline: enter the script directly in Jenkins' Web UI, or put a Jenkinsfile script into the project's source repository
  • Loading the Jenkinsfile directly from source control (SCM) is generally the recommended approach in Jenkins
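This article uses the Scripted syntax throughout. For comparison, a minimal Declarative Pipeline (the stage names here are just placeholders) might look like this:

```groovy
// Declarative syntax: a fixed top-level structure that Jenkins validates up front
pipeline {
    agent any            // run on any available node
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
}
```

Declarative syntax is stricter but easier to validate; Scripted syntax, used below, is plain Groovy and more flexible.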

Create a simple Pipeline

Here we quickly create a simple Pipeline by entering the script directly in the Jenkins Web UI.

  • Create a Job: in the Web UI, click New Item -> enter the name pipeline-demo -> select Pipeline -> click OK
  • Configure: enter the following script in the Pipeline field at the bottom and click Save:
    node {
      stage('Clone') {
        echo "1.Clone Stage"
      }
      stage('Test') {
        echo "2.Test Stage"
      }
      stage('Build') {
        echo "3.Build Stage"
      }
      stage('Deploy') {
        echo "4.Deploy Stage"
      }
    }
  • Build: click Build Now in the left sidebar and you will see the Job start building

After a while, when the build is complete, click Console Output in the left area and you will see output like the following:

We can see that the four echo statements in our Pipeline script were printed, exactly as expected.

If you are not familiar with Pipeline syntax, you can follow the Pipeline Syntax link below the script input box; it provides extensive documentation on Pipeline syntax and can even generate script snippets for us automatically.

Building tasks on the Slave

Above we created a simple Pipeline task, but notice that it did not run on a Jenkins Slave. So how do we make our task run on a Slave? Remember the label we had to set last time when we added the Slave Pod? That is exactly what we need here. Re-edit the Pipeline script created above and add a label argument to node, as follows:

node('haimaxy-jnlp') {
    stage('Clone') {
      echo "1.Clone Stage"
    }
    stage('Test') {
      echo "2.Test Stage"
    }
    stage('Build') {
      echo "3.Build Stage"
    }
    stage('Deploy') {
      echo "4.Deploy Stage"
    }
}

We simply added the haimaxy-jnlp label to node and saved. Before building, check the Pods in the Kubernetes cluster:

$ kubectl get pods -n kube-ops
NAME                       READY     STATUS              RESTARTS   AGE
jenkins-7c85b6f4bd-rfqgv   1/1       Running             4          6d

Then re-trigger the build immediately:

$ kubectl get pods -n kube-ops
NAME                       READY     STATUS    RESTARTS   AGE
jenkins-7c85b6f4bd-rfqgv   1/1       Running   4          6d
jnlp-0hrrz                 1/1       Running   0          23s

Notice that an extra Pod named jnlp-0hrrz is now running; after a while, once the build finishes, the Pod is gone again:

$ kubectl get pods -n kube-ops
NAME                       READY     STATUS    RESTARTS   AGE
jenkins-7c85b6f4bd-rfqgv   1/1       Running   4          6d
Copy the code

This also proves that our Job build has completed. Go back to the Jenkins Web UI and check the Console Output; you will see output like the following:

This proves that the task ran inside the dynamically created Pod above, just as we expected. Going back to the Job's main page, you will also see the familiar Stage View:

(Screenshot: Stage View of the pipeline-demo job)

Deploy the Kubernetes application

Now that we know how to run build tasks on a Jenkins Slave, how do we deploy a native Kubernetes application? First, recall our usual application deployment workflow:

  • Write the code
  • Test the code
  • Write a Dockerfile
  • Build and package the Docker image
  • Push the Docker image to the repository
  • Write the Kubernetes YAML files
  • Change the Docker image TAG in the YAML file
  • Deploy the application with the kubectl tool

The process we previously used to deploy a native application to Kubernetes is essentially the same as the steps above. Now we want Jenkins to automate these steps for us (except writing the code, of course): everything from testing through updating the YAML file is the CI part, and the deployment is the CD part. Following our example above, how would we write such a Pipeline script?

node('haimaxy-jnlp') {
    stage('Clone') {
      echo "1.Clone Stage"
    }
    stage('Test') {
      echo "2.Test Stage"
    }
    stage('Build') {
      echo "3.Build Docker Image Stage"
    }
    stage('Push') {
      echo "4.Push Docker Image Stage"
    }
    stage('YAML') {
      echo "5. Change YAML File Stage"
    }
    stage('Deploy') {
      echo "6.Deploy Stage"
    }
}

Here we will deploy a simple Golang program to the Kubernetes environment; the code is at: github.com/cnych/jenki… . Following the previous example, we would flesh out the Pipeline script like this:

  • Step 1: clone the code
  • Step 2: run the tests; if they pass, move on to the next stage
  • Step 3: since the Dockerfile is normally kept with the source code, build the Docker image directly here
  • Step 4: once the image is built, push it to the image repository
  • Step 5: after the push completes, update the image TAG in the YAML file to the new image TAG
  • Step 6: with everything ready, the final step is to deploy with the kubectl command-line tool

At this point, our entire CI/CD process is complete.

Let’s go through the details of each step:

Step 1: Clone the code

stage('Clone') {
    echo "1.Clone Stage"
    git url: "https://github.com/cnych/jenkins-demo.git"
}

Step 2: Test

Since the demo is very simple, we skip this step for now.
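If we did want a real test stage for the Golang demo, a minimal sketch (assuming the Slave image has the Go toolchain available) might look like:

```groovy
stage('Test') {
    echo "2.Test Stage"
    // Run all Go unit tests in the repository; a non-zero exit fails the build
    sh "go test ./..."
}
```

Any test runner works here; the only contract is that sh fails the stage when the command exits non-zero.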

Step 3: Build the image

stage('Build') {
    echo "3.Build Docker Image Stage"
    sh "docker build -t cnych/jenkins-demo:${build_tag} ."
}

We normally run the docker build command directly to build an image, so why does it work here? Recall that the Slave Pod image we provided in the last lesson uses the Docker-in-Docker mode, which means we can run the docker build command directly inside the Slave. So we simply use sh to execute it. But what about the image tag? If we used a fixed tag, every build would overwrite the same image, which is not what we want. A better approach is to use the short hash of the git commit as the image tag: each build then produces a unique, traceable image, which is also convenient to look up later. However, since this tag is needed not only by this stage but also by the later push stage, we store it in a shared variable, assigned in the Clone stage. With that, the first two stages look like this:

stage('Clone') {
    echo "1.Clone Stage"
    git url: "https://github.com/cnych/jenkins-demo.git"
    script {
        build_tag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
    }
}
stage('Build') {
    echo "3.Build Docker Image Stage"
    sh "docker build -t cnych/jenkins-demo:${build_tag} ."
}
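The build_tag value comes from git rev-parse --short HEAD, which prints the abbreviated hash of the current commit. A quick shell sketch of what the script block computes (using a throwaway repository):

```shell
#!/bin/sh
# Create a throwaway repo to demonstrate what the Pipeline's script block does
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "demo"
# Same command the Pipeline runs; trim() in Groovy strips the trailing newline
build_tag=$(git rev-parse --short HEAD)
echo "image tag: cnych/jenkins-demo:${build_tag}"
```

The returnStdout: true option makes sh return the command's output instead of its exit code, which is why trim() is needed to drop the newline.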

Step 4: Push the image

The image build is complete; now we need to push the image to an image repository. If you have a private registry, that works too, but since we have not set up our own private repository, we will use Docker Hub directly.

In a later lesson we will learn how to set up a private Harbor registry and switch to pushing there instead.

Docker Hub is a public image repository: anyone can pull the images on it, but pushing an image requires an account. So register a Docker Hub account in advance and remember the username and password; we need them here. Normally, when pushing an image locally, we first run the docker login command, enter the username and password, and once authentication succeeds we push the local image to Docker Hub with docker push. Following that pattern, the Pipeline here would be written like this:

stage('Push') {
    echo "4.Push Docker Image Stage"
    sh "docker login -u cnych -p xxxxx"
    sh "docker push cnych/jenkins-demo:${build_tag}"
}

If we only ever ran this from the Jenkins Web UI, we could write the Pipeline like that. But we recommend keeping the Jenkinsfile in source control for versioning, and exposing the Docker repository username and password directly in the script is obviously very unsafe, all the more so because we are using a public GitHub repository where anyone can read our source. So we need a way to keep private information such as the username and password hidden. Fortunately, Jenkins provides a solution for this.

On the home page, click Credentials -> Global credentials (unrestricted) -> Add Credentials, and add a Username with password credential as follows:

Enter the Docker Hub username and password, and enter dockerHub in the ID field. Note that this value is very important; it will be used in the Pipeline script later.

With the Docker Hub username and password stored as a credential, we can now use them in the Pipeline:

stage('Push') {
    echo "4.Push Docker Image Stage"
    withCredentials([usernamePassword(credentialsId: 'dockerHub', passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
        sh "docker login -u ${dockerHubUser} -p ${dockerHubPassword}"
        sh "docker push cnych/jenkins-demo:${build_tag}"
    }
}

Notice that the stage uses a new function, withCredentials, where the credentialsId value is the ID we just created; usernameVariable and passwordVariable name the variables that receive the username and password (here, the ID value plus User and Password). We then use these two variables in the script instead of the hard-coded Docker Hub username and password. This is much safer: the script only references two variables, so nobody can read the real username and password, which exist only in our own Jenkins instance.
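As a side note, passing the password with -p puts it on the command line, where it can leak into logs and process listings. A slightly safer variant (assuming a reasonably recent Docker CLI) pipes it to docker login via --password-stdin:

```groovy
stage('Push') {
    echo "4.Push Docker Image Stage"
    withCredentials([usernamePassword(credentialsId: 'dockerHub', passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
        // Single-quoted Groovy string: the shell expands the env vars, so the
        // secret never appears in the interpolated command line
        sh 'echo "$dockerHubPassword" | docker login -u "$dockerHubUser" --password-stdin'
        sh "docker push cnych/jenkins-demo:${build_tag}"
    }
}
```

Using single-quoted Groovy strings here is also what Jenkins itself recommends, since Groovy interpolation of credential variables can leak secrets into the build log.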

Step 5: Change the YAML file

With the image built and pushed, the next step is to update the image version in the Kubernetes cluster. For easier maintenance, we describe the application's deployment rules in a YAML file, for example the following k8s.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-demo
spec:
  template:
    metadata:
      labels:
        app: jenkins-demo
    spec:
      containers:
      - image: cnych/jenkins-demo:<BUILD_TAG>
        imagePullPolicy: IfNotPresent
        name: jenkins-demo
        env:
        - name: branch
          value: <BRANCH_NAME>
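Note that the extensions/v1beta1 API for Deployment was removed in Kubernetes 1.16; on a newer cluster the equivalent manifest (assuming the same app: jenkins-demo labels) uses apps/v1, which also makes the selector field mandatory:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-demo
spec:
  selector:
    matchLabels:
      app: jenkins-demo
  template:
    metadata:
      labels:
        app: jenkins-demo
    spec:
      containers:
      - image: cnych/jenkins-demo:<BUILD_TAG>
        imagePullPolicy: IfNotPresent
        name: jenkins-demo
        env:
        - name: branch
          value: <BRANCH_NAME>
```

The placeholders and the sed substitution below work identically with either API version.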

Those familiar with Kubernetes will recognize the YAML above. We use a Deployment resource object to manage the Pod, which uses the image we pushed earlier. The only unusual part is that the Docker image tag is not a concrete tag but a placeholder. If we replace this placeholder with the actual image tag from the current build, we get exactly the image this build should deploy. How do we do the substitution? With a sed command:

stage('YAML') {
    echo "5. Change YAML File Stage"
    sh "sed -i 's/<BUILD_TAG>/${build_tag}/' k8s.yaml"
    sh "sed -i 's/<BRANCH_NAME>/${env.BRANCH_NAME}/' k8s.yaml"
}

The sed commands above replace the <BUILD_TAG> and <BRANCH_NAME> placeholders in k8s.yaml with the value of the build_tag variable and the current branch name.
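To see the substitution in isolation, here is a small shell sketch using a throwaway file instead of the real k8s.yaml (note that in the Pipeline, Groovy interpolates ${build_tag} before the shell runs; in pure shell we use double quotes so the shell expands it):

```shell
#!/bin/sh
# Recreate the placeholder line from the manifest and substitute it, as the YAML stage does
printf '      - image: cnych/jenkins-demo:<BUILD_TAG>\n' > demo.yaml
build_tag=abc1234
sed -i "s/<BUILD_TAG>/${build_tag}/" demo.yaml
cat demo.yaml
# prints:       - image: cnych/jenkins-demo:abc1234
```

(sed -i edits in place with GNU sed; BSD/macOS sed needs `-i ''`.)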

Step 6: Deploy

Kubernetes YAML file has been changed, so we can use kubectl apply to update the application directly. Of course, here we just write into the Pipeline, the idea is the same:

stage('Deploy') {
    echo "6. Deploy Stage"
    sh "kubectl apply -f k8s.yaml"
}

So at this point our whole process is complete.

Manual confirmation

In theory, the six steps above complete the process, but in real project practice we often need a manual-intervention step. Why? Suppose we push a commit, the tests pass, and the image is built and pushed; that version should not necessarily go straight to production. We may want to release it to a test, QA, or staging environment first; releasing directly to the production environment in one go is quite rare. So we need a manual confirmation step, and manual intervention generally belongs in the CD part. For example, we can add a confirmation before the last two steps here:

stage('YAML') {
    echo "5. Change YAML File Stage"
    def userInput = input(
        id: 'userInput',
        message: 'Choose a deploy environment',
        parameters: [
            [
                $class: 'ChoiceParameterDefinition',
                choices: "Dev\nQA\nProd",
                name: 'Env'
            ]
        ]
    )
    echo "This is a deploy step to ${userInput}"
    sh "sed -i 's/<BUILD_TAG>/${build_tag}/' k8s.yaml"
    sh "sed -i 's/<BRANCH_NAME>/${env.BRANCH_NAME}/' k8s.yaml"
}

We use the input step, which presents a Choice list for the user to pick from. After the deployment environment is chosen, we can perform different operations per environment, for example deploying the YAML to different namespaces, adding different labels, and so on:

stage('Deploy') {
    echo "6. Deploy Stage"
    if (userInput == "Dev") {
      // deploy dev stuff
    } else if (userInput == "QA"){
      // deploy qa stuff
    } else {
      // deploy prod stuff
    }
    sh "kubectl apply -f k8s.yaml"
}
Copy the code

Since this step is also part of the deployment, we can merge the last two steps into one. Our final Pipeline script looks like this:

node('haimaxy-jnlp') {
    stage('Clone') {
        echo "1.Clone Stage"
        git url: "https://gitee.com/bingh/k8s_cicd.git"
        script {
            build_tag = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
        }
    }
    stage('Test') {
      echo "2.Test Stage"
    }
    stage('Build') {
        echo "3.Build Docker Image Stage"
        sh "docker build -t your-docker-hub/jenkins-demo:${build_tag} ."
    }
    stage('Push') {
        echo "4.Push Docker Image Stage"
        withCredentials([usernamePassword(credentialsId: 'dockerHub', passwordVariable: 'dockerHubPassword', usernameVariable: 'dockerHubUser')]) {
            sh "docker login -u ${dockerHubUser} -p ${dockerHubPassword}"
            sh "docker push your-docker-hub/jenkins-demo:${build_tag}"
        }
    }
    stage('Deploy') {
        echo "5. Deploy Stage"
        def userInput = input(
            id: 'userInput',
            message: 'Choose a deploy environment',
            parameters: [
                [
                    $class: 'ChoiceParameterDefinition',
                    choices: "Dev\nQA\nProd",
                    name: 'Env'
                ]
            ]
        )
        echo "This is a deploy step to ${userInput}"
        sh "sed -i 's/<BUILD_TAG>/${build_tag}/' k8s.yaml"
        sh "sed -i 's/<BRANCH_NAME>/${env.BRANCH_NAME}/' k8s.yaml"
        if (userInput == "Dev") {
            // deploy dev stuff
        } else if (userInput == "QA"){
            // deploy qa stuff
        } else {
            // deploy prod stuff
        }
        sh "kubectl apply -f k8s.yaml"
    }
}

Now let's reconfigure the pipeline-demo task in the Jenkins Web UI: paste the script above into the Script area, save it, and click Build Now on the left to trigger the build. After a while, you will see the Stage View pause:

This is the manual confirmation we added to the Deploy stage above: the build pauses and waits for manual validation. For example, select QA here and click Proceed, and the build continues to success. In the Deploy stage of the Stage View we can see the following log output:

It prints QA, matching the selection we just made. Now let's look at the deployed application in the Kubernetes cluster:

$ kubectl get deployment -n kube-ops
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
jenkins        1         1         1            1           7d
jenkins-demo   1         1         1            0           1m
$ kubectl get pods -n kube-ops
NAME                           READY     STATUS      RESTARTS   AGE
jenkins-7c85b6f4bd-rfqgv       1/1       Running     4          7d
jenkins-demo-f6f4f646b-2zdrq   0/1       Completed   4          1m
$ kubectl logs jenkins-demo-f6f4f646b-2zdrq -n kube-ops
Hello, Kubernetes! I'm from Jenkins CI!

We can see that our application has been correctly deployed to the Kubernetes cluster environment.

Jenkinsfile

It has been a long march, and it seems our task is complete. In fact, we have only finished one manually created build task. In real-world practice, we more often write the Pipeline script into a Jenkinsfile and commit it to the code repository alongside the source for version control. Copy the script above into a Jenkinsfile and put it in the git repository. Since the Jenkinsfile now lives in the repository, Jenkins checks out the code before running it, so the explicit git clone step in the first Clone stage can be removed; for reference: github.com/cnych/jenki…

To update the pipeline-demo task, go to Configure -> the Pipeline area at the bottom -> change the previous Pipeline script option to Pipeline script from SCM, then fill in the repository configuration for your actual situation, and pay attention to the Jenkinsfile script path:

Finally, click Save. Now make a small change to the source code, for example rename the first stage in the Jenkinsfile to Prepare, and commit the code.