Background

At present, a Kubernetes cluster of 5 servers has been set up, monitoring and log collection are in place, and the business has been manually migrated to the cluster and is running smoothly. The next step is therefore to migrate the original CI/CD process, which was based on a native Docker environment, to the Kubernetes cluster.

Advantages

Implementing CI/CD on a Kubernetes cluster has several significant advantages:

  1. Deployments natively support rolling updates; combined with other Kubernetes features, blue-green deployments, canary deployments, and more can also be implemented (see the sketch after this list)
  2. Newer versions of GitLab and GitLab Runner natively support Kubernetes clusters and runner auto-scaling, which reduces resource usage
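As an illustration of the first point, a Deployment's rolling update behavior can be tuned directly in its manifest. A minimal sketch; the name, labels, image, and replica count below are illustrative, not part of this project:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                 # illustrative name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # at most one extra Pod during the rollout
      maxUnavailable: 0          # never drop below the desired replica count
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: harbor.fjy8018.top/demo/demo-app:1.0.0   # illustrative image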

Environment

Kubernetes version: 1.14

GitLab version: 12.2.5

Gitlab-runner version: 12.1.0

Docker Environment version: 17.03.1

GitLab Runner deployment

Configuration overview

In the original environment, GitLab Runner was set up in two steps by manually running the registration command and the start command from the official website, which required a fair amount of manual work. In Kubernetes, however, it supports one-click deployment with Helm, as described in the official document below

GitLab Runner Helm Chart

In practice, the official documentation is not very clear and many configuration options are not covered. It is recommended to go to the chart's source repository to read the detailed parameter documentation.

The Kubernetes executor

The documentation describes several key configurations that will be used later when modifying the project's CI configuration file.
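For context, with the Kubernetes executor each job runs in its own Pod, and the executor is configured through the runner's config.toml, which the Helm chart generates from values.yaml. A minimal sketch with illustrative values, not the exact file produced in this setup:

[[runners]]
  name = "k8s-runner"                   # illustrative
  url = "https://gitlab.fjy8018.top/"
  token = "RUNNER_TOKEN"                # filled in when the runner registers
  executor = "kubernetes"
  [runners.kubernetes]
    namespace = "gitlab-runner"
    privileged = true                   # needed only for DinD builds
    poll_timeout = 180
    cpu_request = "200m"
    memory_request = "128Mi"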

DinD builds are no longer recommended

Official Documentation

Use docker-in-docker workflow with Docker executor

The second approach is to use the special docker-in-docker (dind) Docker image with all tools installed (docker) and run the job script in context of that image in privileged mode.

Note: docker-compose is not part of docker-in-docker (dind). To use docker-compose in your CI builds, follow the docker-compose installation instructions.

Danger: By enabling --docker-privileged, you are effectively disabling all of the security mechanisms of containers and exposing your host to privilege escalation which can lead to container breakout. For more information, check out the official Docker documentation on Runtime privilege and Linux capabilities.

Docker-in-Docker works well, and is the recommended configuration, but it is not without its own challenges:

  • When using docker-in-docker, each job is in a clean environment without the past history. Concurrent jobs work fine because every build gets its own instance of Docker engine so they won’t conflict with each other. But this also means jobs can be slower because there’s no caching of layers.
  • By default, Docker 17.09 and higher uses --storage-driver overlay2 which is the recommended storage driver. See Using the overlayfs driver for details.
  • Since the docker:19.03.1-dind container and the Runner container don’t share their root filesystem, the job’s working directory can be used as a mount point for child containers. For example, if you have files you want to share with a child container, you may create a subdirectory under /builds/$CI_PROJECT_PATH and use it as your mount point (for a more thorough explanation, check issue #41227):

In short, building container images with DinD is not impossible, but it comes with many caveats. For example, using the recommended overlay2 storage driver requires Docker 17.09 or higher.

Using docker:dind

Running the docker:dind also known as the docker-in-docker image is also possible but sadly needs the containers to be run in privileged mode. If you’re willing to take that risk other problems will arise that might not seem as straight forward at first glance. Because the docker daemon is started as a service usually in your .gitlab-ci.yaml it will be run as a separate container in your Pod. Basically containers in Pods only share volumes assigned to them and an IP address by which they can reach each other using localhost. /var/run/docker.sock is not shared by the docker:dind container and the docker binary tries to use it by default.

To overwrite this and make the client use TCP to contact the Docker daemon, in the other container, be sure to include the environment variables of the build container:

  • DOCKER_HOST=tcp://localhost:2375 for no TLS connection.
  • DOCKER_HOST=tcp://localhost:2376 for TLS connection.

Make sure to configure these variables. As of Docker 19.03, TLS is enabled by default, but it requires mounting the certificates into your client. You can enable a non-TLS connection for DinD or mount certificates as described in "Use docker-in-docker workflow with Docker executor".

Since Docker 19.03.1, TLS is enabled by default. This must be declared in the build's environment variables, otherwise Docker will report that it cannot connect to the daemon. Moreover, DinD builds require the Runner to run in privileged mode to access host resources, and because privileged mode is used, the resource limits configured for the runner's Pods no longer take effect.

Build Docker images using Kaniko

There is currently another officially documented way to build and push images inside containers that is more elegant and allows seamless migration: Kaniko.

Building a Docker image with kaniko

Its advantages, as described on the official website:

Another way to build Docker images in a Kubernetes cluster is to use kaniko. kaniko:

  • Allows you to build images without privileged access.
  • Docker daemon is not required to work.

In the practice below, both approaches are used to build Docker images; choose according to your actual situation.

Deployment using Helm

Pull the GitLab Runner Helm chart to the local machine and modify the configuration

GitLab Runner
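One way to get the chart locally for editing, shown here as a sketch; the clone method and local path are illustrative, and the chart can also be fetched from the official gitlab Helm repository:

$ git clone https://gitlab.com/gitlab-org/charts/gitlab-runner.git /root/gitlab-runner
# or, assuming the gitlab chart repository has been added:
$ helm repo add gitlab https://charts.gitlab.io
$ helm fetch gitlab/gitlab-runner --untar --untardir /root/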

The original GitLab Runner configuration, migrated to the Helm values.yaml, is as follows

image: gitlab/gitlab-runner:alpine-v12.1.0
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.fjy8018.top/
runnerRegistrationToken: "ZXhpuj4Dxmx2tpxW9Kdr"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
  create: true
  clusterWideAccess: false
metrics:
  enabled: true
  listenPort: 9090
runners:
  image: ubuntu:16.04
  imagePullSecrets:
    - name: registry-secret
  locked: false
  tags: "k8s"
  runUntagged: true
  privileged: true
  pollTimeout: 180
  outputLimit: 4096
  cache: {}
  builds: {}
  services: {}
  helpers: {}
resources:
   limits:
     memory: 2048Mi
     cpu: 1500m
   requests:
     memory: 128Mi
     cpu: 200m
affinity: {}
nodeSelector: {}
tolerations: []
hostAliases:
   - ip: "192.168.1.13"
     hostnames:
     - "gitlab.fjy8018.top"
   - ip: "192.168.1.30"
     hostnames:
     - "harbor.fjy8018.top"
podAnnotations: {}

This configures the registration token, the intranet Harbor address, the secret for pulling images from Harbor, and the resource limit policy.

Pitfalls in choosing the gitlab-runner image

The runner image alpine-v12.1.0 is chosen deliberately. The latest runner version at the time of writing is 12.5.0, but it has many problems: the newer Alpine-based images intermittently fail to resolve DNS inside Kubernetes. The typical gitlab-runner errors are "Could not resolve host" and "Server misbehaving".

Reference solutions

Searching shows that there are several related issues in the official repository

Official GitLab issue: Kubernetes Runner: Could not resolve host

Stackoverflow: Gitlab Runner is not able to resolve DNS of Gitlab Server

The solutions offered invariably downgrade to alpine-v12.1.0

We had same issue for couple of days. We tried change CoreDNS config, move runners to different k8s cluster and so on. Finally today i checked my personal runner and found that runners in cluster had gitlab/gitlab-runner:alpine-v12.3.0, when mine had gitlab/gitlab-runner:alpine-v12.0.1. We added line

image: gitlab/gitlab-runner:alpine-v12.1.0

in values.yaml and this solved problem for us

The root of the problem is that the Alpine base image has issues with DNS resolution in Kubernetes clusters:

ndots breaks DNS resolving #64924

The docker-alpine repository also has an open issue mentioning DNS resolution timeouts and exceptions

DNS Issue #255
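Besides downgrading the image, a workaround often suggested in those issues is to lower the ndots setting for the Pod via dnsConfig. A sketch of what that looks like on a Pod spec, shown for reference only and not applied in this setup:

apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # illustrative
spec:
  containers:
    - name: app
      image: alpine:3.10
      command: ["sleep", "3600"]
  dnsConfig:
    options:
      - name: ndots
        value: "2"             # default is 5; lowering it avoids needless search-domain lookups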

Installation

Install with a single command

$ helm install /root/gitlab-runner/ --name k8s-gitlab-runner --namespace gitlab-runner

The output is as follows

NAME:   k8s-gitlab-runner
LAST DEPLOYED: Tue Nov 26 21:51:57 2019
NAMESPACE: gitlab-runner
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                             DATA  AGE
k8s-gitlab-runner-gitlab-runner  5     0s

==> v1/Deployment
NAME                             READY  UP-TO-DATE  AVAILABLE  AGE
k8s-gitlab-runner-gitlab-runner  0/1    1           0          0s

==> v1/Pod(related)
NAME                                              READY  STATUS   RESTARTS  AGE
k8s-gitlab-runner-gitlab-runner-744d598997-xwh92  0/1    Pending  0         0s

==> v1/Role
NAME                             AGE
k8s-gitlab-runner-gitlab-runner  0s

==> v1/RoleBinding
NAME                             AGE
k8s-gitlab-runner-gitlab-runner  0s

==> v1/Secret
NAME                             TYPE    DATA  AGE
k8s-gitlab-runner-gitlab-runner  Opaque  2     0s

==> v1/ServiceAccount
NAME                             SECRETS  AGE
k8s-gitlab-runner-gitlab-runner  1        0s


NOTES:

Your GitLab Runner should now be registered against the GitLab instance reachable at: "https://gitlab.fjy8018.top/"

Check the GitLab admin page and you can see that a runner has registered successfully.

Project configuration

Configuration required for building with DinD

If the original CI file builds with the 19.03 DinD image, TLS configuration is required

image: docker:19.03

variables:
  DOCKER_DRIVER: overlay
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
.

The rest of the configuration remains unchanged, and the build still uses DinD.

Kubectl and Kubernetes permission configuration

Because a Kubernetes cluster is used and deploying to the cluster requires the kubectl client, a kubectl Docker image was built manually. Its build is triggered from GitLab on DockerHub, so the build content is open and transparent and can be trusted. If there are other build requirements, a pull request can also be raised; more versions will be added later. Currently only 1.14.0 is provided.

fjy8018/kubectl
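A minimal Dockerfile for such a kubectl image might look like the following. This is a sketch; the actual fjy8018/kubectl image may be built differently:

FROM alpine:3.10
ARG KUBECTL_VERSION=v1.14.0
# install curl and CA certs, then download the matching kubectl binary
RUN apk add --no-cache curl ca-certificates \
    && curl -Lo /usr/local/bin/kubectl \
       https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl \
    && chmod +x /usr/local/bin/kubectl
CMD ["kubectl", "version", "--client"]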

With the kubectl client in place, you also need to configure the TLS connection and the account used for the connection.

To ensure security, create a ServiceAccount that only has access to the project's namespace

apiVersion: v1
kind: ServiceAccount
metadata:
  name: hmdt-gitlab-ci
  namespace: hmdt

Using the cluster's RBAC mechanism, grant this account admin privileges within that namespace

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: hmdt-gitlab-role
  namespace: hmdt
subjects:
  - kind: ServiceAccount
    name: hmdt-gitlab-ci
    namespace: hmdt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: admin

The token secret generated in the cluster is named hmdt-gitlab-ci-token-86n89

$ kubectl describe sa hmdt-gitlab-ci -n hmdt
Name:                hmdt-gitlab-ci
Namespace:           hmdt
Labels:              <none>
Annotations:         kubectl.kubernetes.io/last-applied-configuration:
                       {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"hmdt-gitlab-ci","namespace":"hmdt"}}
Image pull secrets:  <none>
Mountable secrets:   hmdt-gitlab-ci-token-86n89
Tokens:              hmdt-gitlab-ci-token-86n89
Events:              <none>

Then, from the Secret above, extract the CA certificate

$ kubectl get secret hmdt-gitlab-ci-token-86n89 -n hmdt -o json | jq -r '.data["ca.crt"]' | base64 -d

Then find the corresponding Token

$ kubectl get secret hmdt-gitlab-ci-token-86n89  -n hmdt -o json | jq -r '.data.token' | base64 -d
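With the CA certificate and token extracted, you can verify them before pasting them into GitLab by building a local kubeconfig. A minimal sketch; the API server address used here (https://192.168.1.10:6443) is illustrative:

$ kubectl config set-cluster hmdt-cluster --server=https://192.168.1.10:6443 --certificate-authority=ca.crt --embed-certs=true
$ kubectl config set-credentials hmdt-gitlab-ci --token=<token extracted above>
$ kubectl config set-context hmdt-ci --cluster=hmdt-cluster --user=hmdt-gitlab-ci --namespace=hmdt
$ kubectl config use-context hmdt-ci
$ kubectl get pods -n hmdt   # should succeed with the namespace-scoped admin role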

Associating the Kubernetes cluster with GitLab

Go to the GitLab Kubernetes cluster configuration page and fill in the relevant information so that GitLab connects to the cluster automatically.
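The API server URL that GitLab asks for can be obtained, for example, with kubectl; a sketch:

$ kubectl cluster-info
# the "Kubernetes master is running at https://..." line is the API URL to paste into GitLab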

Note that the option that lets GitLab manage the cluster must be unchecked, otherwise GitLab will automatically create a new service account instead of using the one created above, and a permission error will be reported during execution.

When that option is checked, GitLab creates a new service account, hmdt-prod-service-account, which does not have permission to operate in the specified namespace.

GitLab environment configuration

Create an environment

The name and URL can be customized as needed

CI Script Configuration

The final CI file is as follows; it uses DinD to build the Dockerfile

image: docker:19.03

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode -Dmaven.test.skip=true"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  DOCKER_DRIVER: overlay
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
  SPRING_PROFILES_ACTIVE: docker
  IMAGE_VERSION: "1.8.6"
  DOCKER_REGISTRY_MIRROR: "https://XXX.mirror.aliyuncs.com"

stages:
  - test
  - package
  - review
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: test
  retry: 2
  script:
    - mvn $MAVEN_CLI_OPTS clean package -U -B -T 2C
  artifacts:
    expire_in: 1 week
    paths:
      - target/*.jar

maven-scan:
  stage: test
  retry: 2
  image: maven:3-jdk-8
  script:
    - mvn $MAVEN_CLI_OPTS verify sonar:sonar

maven-deploy:
  stage: deploy
  retry: 2
  image: maven:3-jdk-8
  script:
    - mvn $MAVEN_CLI_OPTS deploy

docker-harbor-build:
  image: docker:19.03
  stage: package
  retry: 2
  services:
    - name: docker:19.03-dind
      alias: docker
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$IMAGE_VERSION" .
    - docker push "$CI_REGISTRY_IMAGE:$IMAGE_VERSION"
    - docker logout $CI_REGISTRY

deploy_live:
  image: fjy8018/kubectl:v1.14.0
  stage: deploy
  retry: 2
  environment:
    name: prod
    url: https://XXXX
  script:
    - kubectl version
    - kubectl get pods -n hmdt
    - cd manifests/
    - sed -i "s/__IMAGE_VERSION_SLUG__/${IMAGE_VERSION}/" deployment.yaml
    - kubectl apply -f deployment.yaml
    - kubectl rollout status -f deployment.yaml
    - kubectl get pods -n hmdt
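For reference, the manifests/deployment.yaml used above contains the __IMAGE_VERSION_SLUG__ placeholder that sed replaces. A trimmed sketch; the names and image path are illustrative, not the project's actual manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmdt-app               # illustrative
  namespace: hmdt
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hmdt-app
  template:
    metadata:
      labels:
        app: hmdt-app
    spec:
      imagePullSecrets:
        - name: registry-secret
      containers:
        - name: hmdt-app
          image: harbor.fjy8018.top/hmdt/hmdt-app:__IMAGE_VERSION_SLUG__
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: docker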

If you need to build a Dockerfile using Kaniko, the configuration is as follows

Note that gcr.io/kaniko-project/executor:debug is hosted on Google's container registry and may not be pullable from some networks

image: docker:19.03

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode -Dmaven.test.skip=true"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  DOCKER_DRIVER: overlay
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""
  SPRING_PROFILES_ACTIVE: docker
  IMAGE_VERSION: "1.8.6"
  DOCKER_REGISTRY_MIRROR: "https://XXX.mirror.aliyuncs.com"

cache:
  paths:
    - target/

stages:
  - test
  - package
  - review
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: test
  retry: 2
  script:
    - mvn $MAVEN_CLI_OPTS clean package -U -B -T 2C
  artifacts:
    expire_in: 1 week
    paths:
      - target/*.jar

maven-scan:
  stage: test
  retry: 2
  image: maven:3-jdk-8
  script:
    - mvn $MAVEN_CLI_OPTS verify sonar:sonar

maven-deploy:
  stage: deploy
  retry: 2
  image: maven:3-jdk-8
  script:
    - mvn $MAVEN_CLI_OPTS deploy


docker-harbor-build:
  stage: package
  retry: 2
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$IMAGE_VERSION

deploy_live:
  image: fjy8018/kubectl:v1.14.0
  stage: deploy
  retry: 2
  environment:
    name: prod
    url: https://XXXX
  script:
    - kubectl version
    - kubectl get pods -n hmdt
    - cd manifests/
    - sed -i "s/__IMAGE_VERSION_SLUG__/${IMAGE_VERSION}/" deployment.yaml
    - kubectl apply -f deployment.yaml
    - kubectl rollout status -f deployment.yaml
    - kubectl get pods -n hmdt

Pipeline execution

Runner auto-scaling

Runners in Kubernetes scale up and down automatically based on the number of jobs, currently up to 10 concurrent runners.

Grafana can also be used to monitor resource usage in the cluster during builds.

Results of building the Dockerfile with DinD

Results of building the Dockerfile with Kaniko

Deployment results

GitLab automatically injects the configured kubectl config when the deployment job runs.

Build results

After the deployment is complete, you can view the result on the environment configuration page; only successful deployments are recorded.