When it comes to Kubernetes-based CI/CD, there are many tools to choose from, such as Jenkins, GitLab CI, and the emerging Drone. Here we will use the most familiar one, Jenkins, as our CI/CD tool. This tutorial is based on Kubernetes 1.16.1.
Create a PVC
PVC is short for PersistentVolumeClaim, a request for storage made by a user. PVCs are similar to Pods: Pods consume node resources, while PVCs consume PV resources. A PVC can request a specific storage size and access mode. As a storage user, you do not need to worry about the underlying storage implementation details; you simply use the PVC.
However, the storage space requested through a PVC may still not satisfy the diverse needs applications have for storage devices, and different applications may have different storage performance requirements, such as read/write speed and concurrency. To solve this problem, Kubernetes introduced a new resource object: StorageClass. By defining StorageClasses, administrators can classify storage resources into certain types, such as fast storage and slow storage. Users can then learn the concrete characteristics of each storage resource from the StorageClass description, and request appropriate storage based on their application's needs.
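For reference, a StorageClass definition is itself quite short. The sketch below assumes a hypothetical provisioner name (fuseim.pri/ifs, commonly used by the nfs-client-provisioner); we do not actually create a StorageClass in the steps that follow:

```yaml
# Illustrative StorageClass: the name "fast" and the provisioner are
# assumptions, not resources created elsewhere in this tutorial.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: fuseim.pri/ifs
reclaimPolicy: Delete
```

A PVC would then reference it via its spec.storageClassName field, and the matching PV would be provisioned automatically.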
NFS
For the convenience of demonstration, we will use the relatively simple NFS storage backend. Next, we will install the NFS service on node 192.168.56.100, with the data directory /data/k8s/.
- Disable the firewall:

$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
- Install and configure NFS:

$ yum -y install nfs-utils rpcbind
Setting permissions for shared directories:
$ mkdir -p /data/k8s/
$ chmod 755 /data/k8s/
The default NFS configuration file is in /etc/exports. Add the following configuration information to this file:
$ vi /etc/exports
/data/k8s *(rw,sync,no_root_squash)
Configuration description:
- /data/k8s: the shared data directory
- *: any host (a network segment, an IP address, or a domain name) may connect
- rw: read and write permission
- sync: write files to disk and memory at the same time
- no_root_squash: when root on a client accesses the shared directory, do not map it to an anonymous user; with the default root_squash behavior, root would be mapped to nobody (anonymous UID/GID)
Of course, NFS has many more configuration options; interested readers can look them up online.
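For example (purely illustrative, not part of this tutorial's setup), an export line can restrict access to a subnet, make the share read-only, and squash all users to the anonymous account:

```
/data/k8s 192.168.56.0/24(ro,sync,all_squash)
```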
- Start the services. NFS must register with rpcbind; once rpcbind is restarted, all registrations are lost and every service registered with it must be restarted. So pay attention to the startup order: start rpcbind first.
$ systemctl start rpcbind.service
$ systemctl enable rpcbind.service
$ systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-07-10 20:57:29 CST; 1min 54s ago
  Process: 17696 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 17697 (rpcbind)
    Tasks: 1
   Memory: 1.1M
   CGroup: /system.slice/rpcbind.service
           └─17697 /sbin/rpcbind -w
Jul 10 20:57:29 master systemd[1]: Starting RPC bind service...
Jul 10 20:57:29 master systemd[1]: Started RPC bind service.
Seeing "Started" above proves that the service started successfully.
Then start the NFS service:
$ systemctl start nfs.service
$ systemctl enable nfs.service
$ systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Tue 2018-07-10 21:35:37 CST; 14s ago
 Main PID: 32067 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service
Jul 10 21:35:37 master systemd[1]: Starting NFS server and services...
Jul 10 21:35:37 master systemd[1]: Started NFS server and services.
If you also see "Started", the NFS server started successfully.
In addition, we can also confirm by the following command:
$ rpcinfo -p | grep nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
View the mount permission of a specific directory:
$ cat /var/lib/nfs/etab
/data/k8s	*(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,secure,no_root_squash,no_all_squash)
Now that the NFS server is installed, let's verify it by installing the NFS client on node 10.151.30.62 (replace with your own IP).
- Before installing NFS, disable the firewall:
$ systemctl stop firewalld.service
$ systemctl disable firewalld.service
Then install NFS
$ yum -y install nfs-utils rpcbind
Once the installation is complete, start RPC and then NFS as above:
$ systemctl start rpcbind.service
$ systemctl enable rpcbind.service
$ systemctl start nfs.service
$ systemctl enable nfs.service
- After the client is started, we mount NFS on the client to test:
Check whether the NFS share directory exists:
$ showmount -e 192.168.56.100
Export list for 192.168.56.100:
/data/k8s *
Then we create a new directory on the client:
$ mkdir -p /root/course/kubeadm/data
Mount the NFS shared directory to the above directory:
$ mount -t nfs 192.168.56.100:/data/k8s /root/course/kubeadm/data
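As an optional aside (not a required step in this tutorial), if you want the client to remount the share automatically after a reboot, an /etc/fstab entry like the following is a common approach:

```
192.168.56.100:/data/k8s  /root/course/kubeadm/data  nfs  defaults  0 0
```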
After the mount is successful, create a file in the directory on the client. Then check whether the file also appears in the shared directory on the NFS server:
$ touch /root/course/kubeadm/data/test.txt
Then check on the NFS server:
$ ls -ls /data/k8s/
total 4
4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt
If the test.txt file appears above, then we have successfully mounted NFS.
PV
With the NFS shared storage above in place, we can now use PV and PVC. A PV, as a storage resource, mainly includes key information such as storage capacity, access mode, storage type, and reclaim policy. Here we create a PV object that uses the NFS backend, with 1Gi of storage, the ReadWriteOnce access mode, and the Recycle reclaim policy. The corresponding YAML file is as follows (pv1-demo.yaml):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/k8s
    server: 192.168.56.100
Kubernetes supports many PV types, such as the common Ceph, GlusterFS, and NFS, and even HostPath (although HostPath is only suitable for single-node testing). See the official Kubernetes PV documentation for the full list of supported types. Since each storage type has its own characteristics, consult the corresponding documentation to set the appropriate parameters when using one.
Then use kubectl to create it directly:
$ kubectl create -f pv1-demo.yaml
persistentvolume "pv1" created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 1Gi RWO Recycle Available 12s
We can see that pv1 has been created successfully and its status is Available, meaning pv1 is ready and can be claimed by a PVC.
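As a quick illustration of how a claim binds to this PV (this PVC is hypothetical; the Jenkins setup below creates its own PV/PVC pair), a PVC only needs a compatible access mode and a request no larger than the PV's capacity:

```yaml
# Hypothetical claim that could bind to pv1 above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```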
Install and deploy Jenkins based on K8s
Most students in this course have heard of Jenkins to some degree, so we won't go into detail about what Jenkins is and will get straight to the point. We will cover Jenkins itself in a separate course later for those who want to go deeper. To install Jenkins in the Kubernetes cluster, create a new Deployment (jenkins2.yaml):
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins2
  namespace: kube-ops
spec:
  selector:
    matchLabels:
      app: jenkins2
  template:
    metadata:
      labels:
        app: jenkins2
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccount: jenkins2
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        - containerPort: 50000
          name: agent
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
        readinessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12
        volumeMounts:
        - name: jenkinshome
          subPath: jenkins2
          mountPath: /var/jenkins_home
      securityContext:
        fsGroup: 1000
      volumes:
      - name: jenkinshome
        persistentVolumeClaim:
          claimName: opspvc
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins2
  namespace: kube-ops
  labels:
    app: jenkins2
spec:
  selector:
    app: jenkins2
  type: NodePort
  ports:
  - name: web
    port: 8080
    targetPort: web
    nodePort: 30002
  - name: agent
    port: 50000
    targetPort: agent
For demonstration purposes, we put all resource objects in this lesson under a namespace named kube-ops, so we first need to create that namespace:
$ kubectl create namespace kube-ops
Here we use the image jenkins/jenkins:lts, the official Jenkins Docker image, which also supports a number of environment variables. Of course, we could also build a custom image to suit our own needs, for example one with some plugins pre-installed; see the documentation at github.com/jenkinsci/d… Another thing to note is that we mount the container's /var/jenkins_home directory through a PVC object named opspvc, so we need to create that PVC object in advance. We could also use the StorageClass object from earlier to create it automatically (pvc.yaml):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: opspv
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  nfs:
    server: 192.168.56.100
    path: /data/k8s
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: opspvc
  namespace: kube-ops
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Create the PVC object you want to use:
$ kubectl create -f pvc.yaml
We also need a ServiceAccount with the relevant permissions: jenkins2. Here we grant Jenkins only the permissions it needs. If you are not familiar with ServiceAccounts and RBAC, or if Jenkins needs to deploy dynamically into different namespaces, you can instead bind the ServiceAccount to the cluster-admin ClusterRole, although that carries certain security risks (rbac.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins2
  namespace: kube-ops
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins2
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins2
subjects:
- kind: ServiceAccount
  name: jenkins2
  namespace: kube-ops
- kind: ServiceAccount
  name: jenkins2
  namespace: default
Create an RBAC-related resource object:
$ kubectl create -f rbac.yaml
serviceaccount "jenkins2" created
clusterrole.rbac.authorization.k8s.io "jenkins2" created
clusterrolebinding.rbac.authorization.k8s.io "jenkins2" created
Finally, for convenience in testing, we expose Jenkins' web service as a NodePort fixed to port 30002. We also need to expose the agent port, which is used for communication between the Jenkins master and its slaves.
With all the prepared resources in place, we directly create the Jenkins service:
$ chown -R 1000 /data/k8s/jenkins2
$ kubectl create -f jenkins2.yaml
deployment.extensions "jenkins2" created
service "jenkins2" created
Once created, it may take a while to pull the image, and then check the Pod’s status:
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
jenkins2-7f5494cd44-pqpzs 0/1 Running 0 2m
We can see that the Pod is in the Running state, but the READY value is 0. Then we use the describe command to check the details of the Pod:
$ kubectl describe pod jenkins2-7f5494cd44-pqpzs -n kube-ops
...
Normal Created 3m kubelet, node01 Created container
Normal Started 3m kubelet, node01 Started container
Warning Unhealthy 1m (x10 over 2m) kubelet, node01 Liveness probe failed: Get http://10.244.1.165:8080/login: dial tcp 10.244.1.165:8080: getsockopt: connection refused
Warning Unhealthy 1m (x10 over 2m) kubelet, node01 Readiness probe failed: Get http://10.244.1.165:8080/login: dial tcp 10.244.1.165:8080: getsockopt: connection refused
You can see the above Warning information, the health check failed, what is the specific cause? You can learn more by looking at the logs:
$ kubectl logs -f jenkins2-7f5494cd44-pqpzs -n kube-ops
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
We do not have permission to create files under Jenkins' home directory. This is because the image runs as the jenkins user (UID 1000) by default, while the shared data directory we mounted from the NFS server via the PVC is owned by root, so the jenkins user has no permission to access it. To solve this, simply change the ownership of our directory under the NFS shared data directory:
$ chown -R 1000 /data/k8s/jenkins2
Another option is to build a custom image that runs as the root user.
Then we will recreate it:
$ kubectl delete -f jenkins2.yaml
deployment.extensions "jenkins2" deleted
service "jenkins2" deleted
$ kubectl create -f jenkins2.yaml
deployment.extensions "jenkins2" created
service "jenkins2" created
Now if we look at the newly generated Pod, there is no error message:
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
jenkins2-7f5494cd44-smn2r 1/1 Running 0 25s
After the service starts successfully, we can access Jenkins at http://<any node IP>:30002 and install and configure it following the prompts:
1. Open Jenkins system management (Manage Jenkins). 2. Open plugin management (Manage Plugins). 3. To speed up plugin downloads, switch the update site to a domestic mirror:
$ cd {your Jenkins working directory}/updates  # go to the update configuration location
# modify the default.json file
$ sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json
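To sanity-check the substitution before touching the real file, you can run the same sed expressions against a sample line (the JSON snippet below is fabricated for illustration; it is not an actual entry from default.json):

```shell
# Run the mirror rewrite against a made-up sample entry and print the result.
sample='{"url": "http://updates.jenkins-ci.org/download/plugins/git.hpi", "connectionCheckUrl": "http://www.google.com"}'
echo "$sample" \
  | sed 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' \
  | sed 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g'
# prints:
# {"url": "https://mirrors.tuna.tsinghua.edu.cn/jenkins/plugins/git.hpi", "connectionCheckUrl": "https://www.baidu.com"}
```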
The initialized password can be viewed in Jenkins’ container log or directly in NFS’s shared data directory:
$ cat /data/k8s/jenkins2/secrets/initAdminPassword
Then choose to install the recommended plug-ins.
After installation, add administrator account to enter Jenkins main interface:
Advantages
Now that Jenkins has been installed, let’s not rush to use it. We need to understand the benefits of using Jenkins in the Kubernetes environment.
We know that continuous build and release is an essential part of our daily work. Currently, most companies use Jenkins clusters to build CI/CD workflows that meet their needs. However, the traditional Jenkins master/slave model has some pain points, such as:
- When the Master has a single point of failure, the entire process becomes unavailable
- Each Slave has a different environment configured, to perform compilation and packaging for different languages, and these differing configurations make management and maintenance very difficult
- Resource allocation is unbalanced: Jobs queue up on some Slaves while others sit idle
- Resources are wasted: each Slave may be a physical machine or a virtual machine, and when a Slave is idle its resources are not fully released
Because of these pain points, we long for a more efficient and reliable way to run the CI/CD process. Docker container technology solves these pain points well, especially in a Kubernetes cluster environment. Below is a simple schematic of a Jenkins cluster built on Kubernetes:
As can be seen from the figure, Jenkins Master and Jenkins Slave run on nodes of Kubernetes cluster in the form of Pod. Master runs on one of the nodes and stores its configuration data to a Volume. The Slave runs on each node and is not always running. It is dynamically created and automatically deleted on demand.
The working process of this approach is roughly as follows: When the Jenkins Master receives the Build request, it will dynamically create a Jenkins Slave running in Pod according to the configured Label and register it with the Master. After running the Job, The Slave is deregistered and the Pod is automatically deleted, returning to its original state.
So what are the benefits of using this approach?
- The service is highly available. When Jenkins Master fails, Kubernetes will automatically create a new Jenkins Master container and allocate Volume to the newly created container to ensure that data is not lost, so as to achieve high availability of cluster service.
- Every time a Job runs, a Jenkins Slave is created automatically; when the Job finishes, the Slave deregisters, its container is deleted, and its resources are released automatically. Moreover, Kubernetes dynamically schedules Slaves onto idle nodes according to each node's resource usage, reducing the queuing that happens when one node's resources are heavily used.
- Good scalability. When Kubernetes cluster resources are seriously insufficient and jobs queue, you can easily add a Kubernetes Node to the cluster to achieve expansion.
The Kubernetes cluster environment has eliminated all of the problems we faced before. It looks perfect.
Configuration
Next we need to configure Jenkins to dynamically generate Slave pods.
Step 1. Install the Kubernetes plugin: go to Manage Jenkins -> Manage Plugins -> Available -> Kubernetes plugin, and select Install.
Step 2. Once installed, go to Manage Jenkins -> Configure System -> Add a new cloud -> select Kubernetes, then fill in the Kubernetes and Jenkins configuration information.
Note the namespace: we use kube-ops here. Then click Test Connection; if the message "Connection test successful" appears, Jenkins can communicate with the Kubernetes API normally. For the Jenkins URL, use jenkins2.kube-ops.svc.cluster.local:8080; the format is <service-name>.<namespace>.svc.cluster.local:8080, filled in with the name of the Jenkins Service created above (here it is jenkins2).
In addition, if the Test Connection fails, there is probably a permission problem, so we need to add the secret corresponding to Jenkins serviceAccount to the Credentials.
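Putting the values above together, the cloud configuration typically ends up looking roughly like this (a sketch; adjust the names to whatever you actually created):

```
Kubernetes URL:        https://kubernetes.default.svc.cluster.local
Kubernetes Namespace:  kube-ops
Jenkins URL:           http://jenkins2.kube-ops.svc.cluster.local:8080
```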
Step 3. Configure the Pod Template, i.e., the template of the Pod that the Jenkins Slave will run as. We again use kube-ops as the namespace, and set the Labels (this is important: the Label is how Jobs select this template). For the image we use cnych/jenkins:jnlp as the base, adding Maven, kubectl, and other useful tools.
Dockerfile
FROM cnych/jenkins:jnlp6
COPY maven.zip /usr/local/
COPY kubectl /usr/bin/
RUN \
cd /usr/local/ && \
unzip -qo maven.zip && \
chmod -R 777 /usr/local/maven && \
chmod +x /usr/bin/kubectl && \
rm -rf /usr/local/bin/kubectl && \
ln -s /usr/local/maven/bin/mvn /usr/bin/mvn
Prepare a complete Maven package and modify its setting.xml so that the local repository points to the mounted path:
<localRepository>/.m2/repository</localRepository>
Modify the mirror address as follows:
<mirror>
<id>alimaven</id>
<name>aliyun maven</name>
<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
<mirrorOf>central</mirrorOf>
</mirror>
Copy kubectl to /opt/jenkins/maven:
$ cp /usr/bin/kubectl /opt/jenkins/maven
The directory structure is as follows:
[root@master maven]# tree .
.
├── Dockerfile
├── kubectl
└── maven.zip
0 directories, 3 files
Package image:
docker build -t yourtag/jenkins:jnlp6 .
This produces the Docker image yourtag/jenkins:jnlp6.
If you are using a Jenkins version later than 2.176.x, make sure to use the yourtag/jenkins:jnlp6 image; otherwise an error will be reported.
We mount the host's /var/run/docker.sock into the Pod container so that the container shares the host's Docker daemon; this is the Docker-in-Docker approach we mentioned, and the docker binary is already packaged into the image above. There is another directory, /root/.kube, which we mount to the same path inside the container so that we can use kubectl in the Pod to access our Kubernetes cluster, allowing us to deploy Kubernetes applications from the Slave Pod later.
Mount paths:

/var/run/docker.sock
/root/.kube
/usr/bin/docker
/usr/bin/kubectl
# Maven packages
/root/.m2/repository   # host path
/.m2/repository        # container path
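Expressed as a raw pod spec rather than plugin UI fields, those host-path mounts would be roughly equivalent to the following (a sketch; the container name jnlp and the volume names are assumptions, not taken from this tutorial's manifests):

```yaml
# Sketch of the pod spec the Kubernetes plugin would generate for the agent.
spec:
  containers:
  - name: jnlp
    image: cnych/jenkins:jnlp6
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: kubeconfig
      mountPath: /root/.kube
    - name: maven-repo
      mountPath: /.m2/repository
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: kubeconfig
    hostPath:
      path: /root/.kube
  - name: maven-repo
    hostPath:
      path: /root/.m2/repository
```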
There are a few more parameters to note. The parameter "Time in minutes to retain slave when idle" indicates how long an idle Slave Pod is kept; it is best to leave it at the default value. If you set it too high, Slave Pods will not be destroyed immediately after a Job completes.
In addition, some students ran into permission problems when running the Slave Pod after configuration, because the Slave Pod had no ServiceAccount configured. Click "Advanced" below the Slave Pod configuration and add the corresponding ServiceAccount:
Some students also found that after configuration, the Jenkins Slave Pod that started could not connect; after 100 connection attempts the Pod was destroyed, another Slave Pod was created and tried again, looping endlessly with messages similar to the following:
If this is the case, you need to clear the run command and parameter values from the Slave Pod
At this point our Kubernetes Plugin is configured.
Test
Now that the Kubernetes plugin is configured, let’s add a Job to see if it can be executed in Slave Pod, and then see if the Pod is destroyed.
Click "create new jobs" on the Jenkins homepage to create a test task, enter a task name, and select a task of type Freestyle project:
Note that the Label Expression entered below, haimaxy-jnlp, is the Label we configured earlier in the Slave Pod template; these two must match.
Then scroll down and, in the Build area, select "Execute shell".
Then enter our test command
echo "Test Kubernetes to dynamically generate Jenkins slave"
echo "==============docker in docker==========="
docker info
echo "=============maven============="
mvn -version
echo "=============kubectl============="
kubectl get pods -n kube-ops
Then click Save.
Now click Build Now on the page to trigger the Build and watch the changes to the Pod in the Kubernetes cluster
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
jenkins2-7c85b6f4bd-rfqgv 1/1 Running 3 1d
jnlp-hfmvd 0/1 ContainerCreating 0 7s
We can see that when we click Build Now, a new Pod, jnlp-hfmvd, is created: this is our Jenkins Slave. After the task is executed we can see the task information, for example that it took 5.2s on the slave jnlp-hfmvd.
You can also view the corresponding console information:
At this point, we can verify that the task is completed. Then we can check the Pod list of the cluster and find that the previous Slave Pod is no longer under the namespace kube-ops.
$ kubectl get pods -n kube-ops
NAME READY STATUS RESTARTS AGE
jenkins2-7c85b6f4bd-rfqgv 1/1 Running 3 1d
At this point we have completed the method of using Kubernetes to dynamically generate Jenkins Slave. Next time we’ll show you how to publish our Kubernetes application in Jenkins.