Welcome to my GitHub
Github.com/zq2599/blog…
Content: a categorized index of all my original articles with companion source code, covering Java, Docker, Kubernetes, DevOps, and more.
About GitLab CI
As shown in the figure below, after a developer pushes code to GitLab, CI scripts can be triggered to run on a GitLab Runner. By writing CI scripts we can automate many tasks: compiling, building, generating Docker images, pushing them to a private registry, and so on:
What this hands-on article covers
Today, we will complete the following operations together:
- Deploy MinIO, which will provide the cache used by the pipeline script;
- Configure and deploy GitLab Runner;
- Write and run pipeline scripts;
Environment and version information
This exercise involves several services; their versions are listed below for reference:
- GitLab: Community Edition 13.0.6
- GitLab Runner: 13.1.0
- Kubernetes: 1.15.3
- Harbor: 1.10.3
- MinIO: 2020-06-18 release
- Helm: 2.16.1
Prerequisite services
The following services need to be in place before starting:
- Deploy GitLab; refer to “Synology DS218+ Deploy GitLab”
- Deploy Harbor; refer to “Synology DS218+ Deploy Harbor (1.10.3)”
- Deploy Helm; refer to “Deploying and Experiencing Helm (2.16.1)”
With the preparation done, let's get started.
Deploy MinIO
MinIO runs as a standalone service; I will deploy it with Docker on the server 192.168.50.43:
- Prepare two directories on the host, one for MinIO's configuration and one for its files, by running the following command:
mkdir -p /var/services/homes/zq2599/minio/gitlab_runner \
&& chmod -R 777 /var/services/homes/zq2599/minio/gitlab_runner \
&& mkdir -p /var/services/homes/zq2599/minio/config \
&& chmod -R 777 /var/services/homes/zq2599/minio/config
- Create the MinIO container, listening on port 9000, with an access key (at least 3 characters) and a secret key (at least 8 characters):
sudo docker run -p 9000:9000 --name minio \
-d --restart=always \
-e "MINIO_ACCESS_KEY=access" \
-e "MINIO_SECRET_KEY=secret123456" \
-v /var/services/homes/zq2599/minio/gitlab_runner:/gitlab_runner \
-v /var/services/homes/zq2599/minio/config:/root/.minio \
minio/minio server /gitlab_runner
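MinIO rejects keys that are too short, so it is worth checking lengths before starting the container. A small sketch using the same example values as above (plain POSIX shell, nothing MinIO-specific):

```shell
# MinIO requires an access key of at least 3 characters and a secret key
# of at least 8 characters; validate before passing them to docker run.
MINIO_ACCESS_KEY="access"
MINIO_SECRET_KEY="secret123456"
if [ "${#MINIO_ACCESS_KEY}" -ge 3 ] && [ "${#MINIO_SECRET_KEY}" -ge 8 ]; then
  echo "keys ok"
else
  echo "keys too short" >&2
  exit 1
fi
```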
- Open the MinIO web console and log in with the access key and secret key;
- Click the icon in the red box below to create a bucket named runner;
- MinIO is now ready; next, deploy GitLab Runner in the Kubernetes environment.
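To confirm the container is actually serving requests, MinIO exposes an unauthenticated liveness endpoint. The sketch below only builds the URL (the host:port matches the docker run command above; adjust for your setup); the actual probe is left as a comment because it needs network access to the MinIO host:

```shell
minio_host="192.168.50.43:9000"                      # host:port used above
health_url="http://${minio_host}/minio/health/live"  # MinIO liveness endpoint
echo "$health_url"
# Run this against the live server (an HTTP 200 means MinIO is up):
# curl -sf "$health_url" && echo "MinIO is up"
```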
Type of GitLab Runner
From a user's point of view, GitLab Runners come in two types, shared and specific:
- To create a runner usable by all GitLab repositories, create a shared runner;
- To create a runner used by only one fixed GitLab repository, create a specific runner;
In today's exercise we will create a specific runner: first there must be a GitLab repository, then a runner dedicated to that repository, so please prepare a GitLab repository in advance.
Prepare GitLab configuration information (specific)
Before deploying GitLab Runner, prepare two pieces of configuration so that it can connect to GitLab as soon as it starts:
- Go to GitLab, open the repository that will use CI, and navigate to Settings -> CI/CD -> Runners -> Expand;
- In the screenshot below, red box 1 is the GitLab URL and red box 2 is the registration token. Note both down; we will use them later:
Prepare GitLab configuration information (shared)
We will not create a shared runner here. If you want a runner usable by all repositories, prepare the following information instead:
- Log in to GitLab as an administrator;
- Obtain the GitLab URL and registration token shown in the red boxes below:
Deploy GitLab Runner
- Make sure that you can use the kubectl command to perform normal operations in Kubernetes.
- Create namespace named gitlab-runner:
kubectl create namespace gitlab-runner
- Create a secret storing the MinIO access key and secret key:
kubectl create secret generic s3access \
--from-literal=accesskey="access" \
--from-literal=secretkey="secret123456" -n gitlab-runner
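Kubernetes stores secret values base64-encoded, which can be confusing when inspecting them later. A local sketch of the round trip for the secret key used above (the kubectl line is commented out because it needs a live cluster):

```shell
plain="secret123456"
encoded=$(printf '%s' "$plain" | base64)      # what kubectl stores in .data
decoded=$(printf '%s' "$encoded" | base64 -d) # decode to get the original back
echo "$decoded"
# On the cluster, the same check would be:
# kubectl get secret s3access -n gitlab-runner \
#   -o jsonpath='{.data.secretkey}' | base64 -d
```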
- Before deploying GitLab Runner with Helm, add the chart repository to Helm's repository list:
helm repo add gitlab https://charts.gitlab.io
- Download GitLab Runner’s chart:
helm fetch gitlab/gitlab-runner
- Extract the downloaded chart gitlab-runner-0.18.0.tgz:
tar -zxvf gitlab-runner-0.18.0.tgz
- Extraction produces a folder named gitlab-runner, shown below; we will modify three of its files:
- Open values.yaml; four changes are needed. First, find the commented-out gitlabUrl parameter and add a gitlabUrl entry whose value is the GitLab URL copied from the GitLab web page, as in the red box below:
- Second, find the commented-out runnerRegistrationToken parameter and set it to the registration token copied from the GitLab web page:
- Third, in the rbac section set create to true and clusterWideAccess to true:
- Fourth, set the runner tag to k8s; jobs in .gitlab-ci.yml will use this tag to select the runner:
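Putting the four edits together, the touched parts of values.yaml might look like the sketch below. The URL and token are placeholders (use the values copied from your own GitLab page), and the key names should be verified against the chart version you actually downloaded:

```yaml
gitlabUrl: http://192.168.50.43/                    # placeholder: your GitLab URL (red box 1)
runnerRegistrationToken: "REGISTRATION_TOKEN_HERE"  # placeholder: your token (red box 2)
rbac:
  create: true
  clusterWideAccess: true
runners:
  tags: "k8s"   # must match the tags used in .gitlab-ci.yml
```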
- Next comes the cache configuration. Before the change, cache is an empty pair of curly braces with everything else commented out:
- After the change the cache configuration looks like this: the empty curly braces at red box 1 are removed, the comment markers at red box 2 are removed, the MinIO access address is filled in at red box 3, and the comment markers at red box 4 are removed with the content left unchanged:
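For reference, the finished cache block might look like the sketch below (key names follow the gitlab-runner chart around version 0.18; verify against your own values.yaml and substitute your MinIO address):

```yaml
cache:
  cacheType: s3
  s3ServerAddress: 192.168.50.43:9000   # the MinIO service deployed earlier
  s3BucketName: runner                  # the bucket created in the MinIO console
  s3CacheInsecure: true                 # true = plain HTTP to MinIO
  cacheShared: true
  secretName: s3access                  # Kubernetes secret holding accesskey/secretkey
```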
- The s3CacheInsecure parameter in red box 4 controls the protocol used to reach MinIO: true means plain HTTP, false means HTTPS. In practice, however, this setting has no effect in this version of the chart. The workaround is to edit the _cache.tpl file in the templates directory; open it and find the lines in the red box below:
- Replace the lines in the red box above with those in the red box below: the old if/end lines are removed and CACHE_S3_INSECURE is assigned directly;
- The next file to change is templates/configmap.yaml, which is where we map the host's docker.sock into the runner executor so that docker commands inside a job reach the host's Docker daemon. Open templates/configmap.yaml, find the position shown below, and add the following between red box 1 and red box 2:
cat >>/home/gitlab-runner/.gitlab-runner/config.toml <<EOF
[[runners.kubernetes.volumes.host_path]]
name = "docker"
mount_path = "/var/run/docker.sock"
read_only = true
host_path = "/var/run/docker.sock"
EOF
- After adding the above content, the overall effect is as follows, and the new content is shown in the red box:
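With docker.sock mapped into the executor, a job can drive the host's Docker daemon directly. A hypothetical job sketch (the image tag and registry path are invented for illustration; the k8s tag matches the runner configured earlier):

```yaml
docker-build:
  stage: build
  tags:
    - k8s
  image: docker:19.03    # any image that ships the docker CLI
  script:
    - docker version     # talks to the host daemon via /var/run/docker.sock
    - docker build -t 192.168.50.43:5000/demo:latest .   # hypothetical registry path
```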
- Go back to the directory containing values.yaml and run the following command to create GitLab Runner:
helm install \
--name-template gitlab-runner \
-f values.yaml . \
--namespace gitlab-runner
- Check that the pod is running normally:
- The pod log shows nothing abnormal:
- Back on the repository's Runners page in GitLab, a newly added runner can be seen:
At this point the entire GitLab CI environment is deployed; next, a quick run verifies that everything works.
Validation
- In the GitLab repository, add a file named .gitlab-ci.yml with the following content:
# image used to run the jobs
image: busybox:latest

# the pipeline has two stages
stages:
  - build
  - test

# the cache key comes from the branch, and the cached location is the vendor folder
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - vendor/

before_script:
  - echo "Before script section"

after_script:
  - echo "After script section"

build1:
  stage: build
  tags:
    - k8s
  script:
    - echo "Write content to cache"
    - mkdir -p vendor
    - echo "build" > vendor/hello.txt

test1:
  stage: test
  script:
    - echo "Read from cache"
    - cat vendor/hello.txt
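The cache key above uses ${CI_COMMIT_REF_SLUG}, which GitLab derives from the branch name: lowercased, everything other than 0-9 and a-z replaced with -, shortened to 63 bytes, with no leading or trailing -. A rough local approximation of that slugification (not GitLab's exact code, but close enough to predict cache keys):

```shell
# Approximate GitLab's CI_COMMIT_REF_SLUG transformation locally.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9]/-/g' -e 's/^-*//' -e 's/-*$//' \
    | cut -c1-63
}
slugify "feature/My_Branch"   # → feature-my-branch
```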
- Commit the script to GitLab as shown in the figure below; the pipeline fires and stays in the pending state while the runner creates an executor pod:
- Click in to view the result:
- Click the build1 icon to see that job's console output:
- Click the test1 icon; its console output shows that the data written by the previous job was read successfully:
GitLab Runner is now successfully deployed and running in the Kubernetes environment. In the next article we will build a Spring Boot application into a Docker image and push it to Harbor.
You are not alone; Xinchen's original articles accompany you all the way
- Java series
- Spring series
- Docker series
- Kubernetes series
- Database + middleware series
- Conversation series
Welcome to follow my WeChat official account: Programmer Xinchen
Search WeChat for “Programmer Xinchen”. I am Xinchen, looking forward to exploring the Java world with you…