Foreword

Docker + K8S + GitLab/SVN + Jenkins + Harbor Continuous Integration and Delivery Environment (Part 1)

Install the GitLab code repository on K8S

Note: Execute the following steps on the Master node (the binghe101 server).

1. Create the K8S-OPS namespace

Create the k8s-ops-namespace.yaml file with the contents shown below.

apiVersion: v1
kind: Namespace
metadata:
  name: k8s-ops
  labels:
    name: k8s-ops

Run the following command to create a namespace.

kubectl apply -f k8s-ops-namespace.yaml 
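
To confirm that the namespace was created, a quick check like the following should work; the STATUS column should show Active.

# Verify the namespace (STATUS should be Active)
kubectl get namespace k8s-ops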

2. Install gitlab-redis

Create the gitlab-redis.yaml file with the contents shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      name: redis
      labels:
        name: redis
    spec:
      containers:
      - name: redis
        image: sameersbn/redis
        imagePullPolicy: IfNotPresent
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - mountPath: /var/lib/redis
          name: data
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/redis

---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: k8s-ops
  labels:
    name: redis
spec:
  ports:
    - name: redis
      port: 6379
      targetPort: redis
  selector:
    name: redis

First, run the following command on the command line to create the /data1/docker/xinsrv/redis directory.

mkdir -p /data1/docker/xinsrv/redis

Run the following command to install gitlab-redis.

kubectl apply -f gitlab-redis.yaml 
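
If you want to verify that Redis is actually serving requests, a sketch like the following should work; the pod name is looked up by label, so no placeholder needs to be filled in.

# Look up the Redis pod name by its label
REDIS_POD=$(kubectl get pod -n k8s-ops -l name=redis -o jsonpath='{.items[0].metadata.name}')
# Ping Redis inside the pod; the expected output is PONG
kubectl exec -n k8s-ops $REDIS_POD -- redis-cli ping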

3. Install gitlab-postgresql

Create gitlab-postgresql.yaml as shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  selector:
    matchLabels:
      name: postgresql
  template:
    metadata:
      name: postgresql
      labels:
        name: postgresql
    spec:
      containers:
      - name: postgresql
        image: sameersbn/postgresql
        imagePullPolicy: IfNotPresent
        env:
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: DB_EXTENSION
          value: pg_trgm
        ports:
        - name: postgres
          containerPort: 5432
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: data
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -h
            - localhost
            - -U
            - postgres
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/postgresql
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  namespace: k8s-ops
  labels:
    name: postgresql
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: postgres
  selector:
    name: postgresql

First, run the following command to create the /data1/docker/xinsrv/postgresql directory.

mkdir -p /data1/docker/xinsrv/postgresql

Next, install gitlab-postgresql as shown below.

kubectl apply -f gitlab-postgresql.yaml
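
As with Redis, you can optionally verify that PostgreSQL is ready before moving on; this sketch reuses the same pg_isready check that the probes use.

# Look up the PostgreSQL pod name by its label
PG_POD=$(kubectl get pod -n k8s-ops -l name=postgresql -o jsonpath='{.items[0].metadata.name}')
# The expected output ends with "accepting connections"
kubectl exec -n k8s-ops $PG_POD -- pg_isready -h localhost -U postgres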

4. Install gitlab

(1) Configure the user name and password

First, use base64 encoding on the command line to transcode the user name and password. In this example, the user name is admin and the password is admin.1231.

The transcoding situation is as follows.

[root@binghe101 k8s]# echo -n 'admin' | base64 
YWRtaW4=
[root@binghe101 k8s]# echo -n 'admin.1231' | base64 
YWRtaW4uMTIzMQ==

The encoded user name is YWRtaW4= and the encoded password is YWRtaW4uMTIzMQ==.

You can also decode base64 encoded strings, for example, password strings, as shown below.

[root@binghe101 k8s]# echo 'YWRtaW4uMTIzMQ==' | base64 --decode 
admin.1231

Next, create the secret-gitlab.yaml file, which configures the GitLab user name and password, as shown below.

apiVersion: v1
kind: Secret
metadata:
  namespace: k8s-ops
  name: git-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: YWRtaW4uMTIzMQ==

Apply the configuration file, as shown below.

kubectl create -f ./secret-gitlab.yaml
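
If you want to double-check that the Secret stores the expected value, you can decode it in place, for example:

# Should print admin.1231
kubectl get secret git-user-pass -n k8s-ops -o jsonpath='{.data.password}' | base64 --decode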

(2) Install GitLab

Create the gitlab.yaml file with the contents shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  selector:
    matchLabels:
      name: gitlab
  template:
    metadata:
      name: gitlab
      labels:
        name: gitlab
    spec:
      containers:
      - name: gitlab
        image: sameersbn/gitlab:12.1.6
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: GITLAB_TIMEZONE
          value: Beijing
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: long-and-random-alpha-numeric-string
        - name: GITLAB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: git-user-pass
              key: password
        - name: GITLAB_ROOT_EMAIL
          value: 12345678@qq.com
        - name: GITLAB_HOST
          value: gitlab.binghe.com
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SSH_PORT
          value: "30022"
        - name: GITLAB_NOTIFY_ON_BROKEN_BUILDS
          value: "true"
        - name: GITLAB_NOTIFY_PUSHER
          value: "false"
        - name: GITLAB_BACKUP_SCHEDULE
          value: daily
        - name: GITLAB_BACKUP_TIME
          value: "01:00"
        - name: DB_TYPE
          value: postgres
        - name: DB_HOST
          value: postgresql
        - name: DB_PORT
          value: "5432"
        - name: DB_USER
          value: gitlab
        - name: DB_PASS
          value: passw0rd
        - name: DB_NAME
          value: gitlab_production
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        ports:
        - name: http
          containerPort: 80
        - name: ssh
          containerPort: 22
        volumeMounts:
        - mountPath: /home/git/data
          name: data
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 180
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1
      volumes:
      - name: data
        hostPath:
          path: /data1/docker/xinsrv/gitlab
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  namespace: k8s-ops
  labels:
    name: gitlab
spec:
  ports:
    - name: http
      port: 80
      nodePort: 30088
    - name: ssh
      port: 22
      targetPort: ssh
      nodePort: 30022
  type: NodePort
  selector:
    name: gitlab

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: k8s-ops
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: gitlab.binghe.com
    http:
      paths:
      - backend:
          serviceName: gitlab
          servicePort: http

Note: When configuring GitLab, use a host name or domain name rather than an IP address for the listening host. In the above configuration, I used the host name gitlab.binghe.com.

Run the following command on the command line to create the /data1/docker/xinsrv/gitlab directory.

mkdir -p /data1/docker/xinsrv/gitlab

Install GitLab as shown below.

kubectl apply -f gitlab.yaml

5. The installation is complete

Check the deployment in the k8s-ops namespace, as shown below.

[root@binghe101 k8s]# kubectl get pod -n k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          11s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h

You can also run the following command to view information.

[root@binghe101 k8s]# kubectl get pod --namespace=k8s-ops
NAME                          READY   STATUS    RESTARTS   AGE
gitlab-7b459db47c-5vk6t       0/1     Running   0          36s
postgresql-79567459d7-x52vx   1/1     Running   0          30m
redis-67f4cdc96c-h5ckz        1/1     Running   1          10h

The effect is the same.

Next, look at the port mappings for GitLab, as shown below.

[root@binghe101 k8s]# kubectl get svc -n k8s-ops
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                     AGE
gitlab       NodePort    10.96.153.100   <none>        80:30088/TCP,22:30022/TCP   2m42s
postgresql   ClusterIP   10.96.203.119   <none>        5432/TCP                    32m
redis        ClusterIP   10.96.107.150   <none>        6379/TCP                    10h

At this point, GitLab can be accessed through the host name gitlab.binghe.com and port 30088 of the Master node (binghe101). Because I use a VM to set up the environment, you need to configure the hosts file of the local host when accessing gitlab.binghe.com, which is mapped from the VM. Add the following entry to the hosts file of the local host.

192.168.175.101 gitlab.binghe.com

Note: On Windows, the hosts file is stored in the following directory:

C:\Windows\System32\drivers\etc

Next, you can access GitLab in your browser at gitlab.binghe.com:30088, as shown below.

At this point, you can log in to GitLab using user name root and password admin.1231.

Note: The user name here is root, not admin, because root is the default superuser for GitLab.

The following page is displayed after login.

At this point, the installation of GitLab on K8S is complete.

Install the Harbor private registry

Note: The Harbor private registry is installed on the Master node (the binghe101 server). In an actual production environment, it is recommended to install it on a separate server.

1. Download the offline version of Harbor

wget https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

2. Decompress the Harbor installation package

tar -zxvf harbor-offline-installer-v1.10.2.tgz

After decompression succeeds, a harbor directory is generated in the current directory on the server.

3. Configure Harbor

Note: Here, I changed the Harbor port to 1180. If you do not change it, the default Harbor port is 80.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml

The following describes the modified configuration items.

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123

# Comment out the https section; otherwise the installation fails with:
# ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

(2) Modify the daemon.json file

Modify the /etc/docker/daemon.json file (create it if it does not exist), and add the following content to it:

[root@binghe~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.175.101:1180"]
}

You can also run the ip addr command on the server to view all IP address segments of the host and add them to the /etc/docker/daemon.json file. Here, the contents of my configured file look like this.

{
    "registry-mirrors": ["https://zz3sblpi.mirror.aliyuncs.com"],
    "insecure-registries": ["192.168.175.0/16", "172.17.0.0/16", "172.18.0.0/16", "172.16.29.0/16", "192.168.175.101:1180"]
}

4. Install and start Harbor

After the configuration is complete, enter the following command to install and start Harbor.

[root@binghe harbor]# ./install.sh 
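
Before logging in, you can optionally confirm that all Harbor components are healthy; run the following in the harbor directory.

# All containers should show a state of Up (healthy)
docker-compose ps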

5. Log in to Harbor and add an account

After the installation succeeds, type http://192.168.175.101:1180 in the browser address bar to open the login page, as shown below.

Enter the user name admin and password binghe123 to log in to the system, as shown below.

Next, choose user management and add an administrator account in preparation for packaging and uploading Docker images later. The steps for adding an account are as follows.

The password is Binghe123.

Click OK, and the following will appear.

At this time, the binghe account is not yet an administrator. Select the binghe account and click "Set as administrator".

The binghe account is now set as an administrator, and the Harbor installation is complete.
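
To confirm that the registry works end to end, you can push a test image with the new account. This is only a sketch; it assumes the default library project that Harbor creates out of the box.

# Log in with the binghe account created above
docker login 192.168.175.101:1180 -u binghe -p Binghe123
# Tag and push an arbitrary small image as a test
docker pull busybox
docker tag busybox 192.168.175.101:1180/library/busybox:test
docker push 192.168.175.101:1180/library/busybox:test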

6. Modify the Harbor port

If you need to modify the Harbor port after installing Harbor, follow the steps below. Here, I take changing port 80 to port 1180 as an example.

(1) Modify the harbor.yml file

cd harbor
vim harbor.yml

The following describes the modified configuration items.

hostname: 192.168.175.101
http:
  port: 1180
harbor_admin_password: binghe123

# Comment out the https section; otherwise the installation fails with:
# ERROR:root:Error: The protocol is https but attribute ssl_cert is not set
#https:
  #port: 443
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

(2) Modify the docker-compose.yml file

vim docker-compose.yml

The following describes the modified configuration items.

ports:
      - 1180:80

(3) Modify the config.yml file

cd common/config/registry
vim config.yml
Copy the code

The following describes the modified configuration items.

realm: http://192.168.175.101:1180/service/token

(4) Restart Docker

systemctl daemon-reload
systemctl restart docker.service

(5) Restart Harbor

[root@binghe harbor]# docker-compose down
Stopping harbor-log ... done
Removing nginx             ... done
Removing harbor-portal     ... done
Removing harbor-jobservice ... done
Removing harbor-core       ... done
Removing redis             ... done
Removing registry          ... done
Removing registryctl       ... done
Removing harbor-db         ... done
Removing harbor-log        ... done
Removing network harbor_harbor
 
[root@binghe harbor]# ./prepare
prepare base dir is set to /mnt/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/root.crt
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
 
[root@binghe harbor]# docker-compose up -d
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db   ... done
Creating redis       ... done
Creating registry    ... done
Creating registryctl ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal     ... done
Creating nginx             ... done
 
[root@binghe harbor]# docker ps -a
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS                             PORTS

Install Jenkins (general practice)

1. Install NFS (skip this step if you have installed it before)

The biggest problem with using NFS is write permissions. You can use the Kubernetes securityContext/runAsUser setting to specify the UID of the jenkins user that runs in the Jenkins container, and then set the permissions of the NFS directory accordingly so that the Jenkins container can write to it. Alternatively, you can leave the permissions unrestricted so that all users can write. For simplicity, I make the directory writable for all users here.

If NFS has been installed before, skip this step. Find a host and install NFS on it. Here, I install NFS on the Master node (the binghe101 server) as an example.

On the command line, enter the following commands to install and start NFS.

yum install nfs-utils -y
systemctl start nfs-server
systemctl enable nfs-server
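
You can optionally confirm that the NFS server is running before continuing, for example:

# The service should be active (running)
systemctl status nfs-server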

2. Create an NFS shared directory

Create the /opt/nfs/jenkins-data directory as the NFS shared directory on the Master node (the binghe101 server), as shown below.

mkdir -p /opt/nfs/jenkins-data

Next, edit the /etc/exports file as shown below.

vim /etc/exports

Add the following line configuration to the /etc/exports file.

/opt/nfs/jenkins-data 192.168.175.0/24(rw,all_squash)

Here, 192.168.175.0/24 is the IP address range of the Kubernetes nodes. The all_squash option maps every accessing user to the nfsnobody user; no matter which user accesses the share, it is squashed to nfsnobody. Therefore, if the owner of /opt/nfs/jenkins-data is changed to nfsnobody, every accessing user has write permission.

This option is useful when the processes on different machines are started by different users because of inconsistent UIDs, but all of them still need write permission on the shared directory.

Next, grant permissions on the /opt/nfs/jenkins-data directory and reload NFS, as shown below.

chown -R 1000 /opt/nfs/jenkins-data/
systemctl reload nfs-server

Verify using the following command on any node in the K8S cluster:

showmount -e NFS_IP

If the output lists /opt/nfs/jenkins-data, the NFS share is working.

The details are as follows.

[root@binghe101 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

[root@binghe102 ~]# showmount -e 192.168.175.101
Export list for 192.168.175.101:
/opt/nfs/jenkins-data 192.168.175.0/24

3. Create a PV

Jenkins can read its previous data after a restart as long as the corresponding directory is mounted. However, because a Deployment cannot define storage volume templates, we can only use a StatefulSet.

The PV is used by the StatefulSet: each time the StatefulSet starts, it creates a PVC through volumeClaimTemplates, so a PV must already exist for the PVC to bind to.

Create the jenkins-pv.yaml file as shown below.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  nfs:
    path: /opt/nfs/jenkins-data
    server: 192.168.175.101
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 1Ti

Here I allocated 1Ti of storage; you can adjust it according to the actual situation.

Run the following command to create a PV.

kubectl apply -f jenkins-pv.yaml 
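
To check that the PV was created, list it; until a PVC binds to it, its STATUS should be Available.

kubectl get pv jenkins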

4. Create a ServiceAccount

Create a ServiceAccount. Because Jenkins later needs to dynamically create slaves, it must have the corresponding permissions.

Create the jenkins-service-account.yaml file as shown below.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins

In the above configuration, a ServiceAccount, a Role, and a RoleBinding are created, and the Role's permissions are bound to the ServiceAccount through the RoleBinding. Therefore, the Jenkins container must run as this ServiceAccount; otherwise, it will not have the Role's permissions.

The Role's permissions are easy to understand: Jenkins needs to create and delete slaves, hence the pods permissions. The secrets permission is for HTTPS certificates.

Run the following command to create a serviceAccount.

kubectl apply -f jenkins-service-account.yaml 
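
If you want to verify that the RBAC binding took effect, kubectl can answer permission questions directly. The YAML above does not specify a namespace, so the objects land in the default namespace (assuming your kubectl context uses it).

# Both commands should print: yes
kubectl auth can-i create pods --as=system:serviceaccount:default:jenkins
kubectl auth can-i get secrets --as=system:serviceaccount:default:jenkins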

5. Install Jenkins

Create the jenkins-statefulset.yaml file as shown below.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  selector:
    matchLabels:
      name: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: docker.io/jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 32100
          resources:
            limits:
              cpu: 4
              memory: 4Gi
            requests:
              cpu: 4
              memory: 4Gi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              # value: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12 # ~2 minutes
  # PVC template, corresponding to previous PV
  volumeClaimTemplates:
    - metadata:
        name: jenkins-home
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti

When deploying Jenkins, pay attention to the number of replicas: you need as many PVs as there are replicas, and storage consumption multiplies accordingly. I use only one replica here, which is why only one PV was created earlier.

Install Jenkins using the following command.

kubectl apply -f jenkins-statefulset.yaml 
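
You can watch the StatefulSet come up and confirm that the PVC created from the volumeClaimTemplates bound to the PV. With one replica, the PVC is named jenkins-home-jenkins-0.

kubectl get statefulset jenkins
# STATUS should be Bound
kubectl get pvc jenkins-home-jenkins-0
kubectl get pod jenkins-0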

6. Create the Service

Create the jenkins-service.yaml file as shown below.

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  # type: LoadBalancer
  selector:
    name: jenkins
  # ensure the client ip is propagated to avoid the invalid crumb issue when using LoadBalancer (k8s >=1.7)
  #externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      nodePort: 31888
      targetPort: 8080
      protocol: TCP
    - name: jenkins-agent
      port: 32100
      nodePort: 32100
      targetPort: 32100
      protocol: TCP
  type: NodePort

Use the following command to install the Service.

kubectl apply -f jenkins-service.yaml 
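
A quick way to confirm the Service and the NodePort mapping is the sketch below; run it once the Jenkins pod is ready.

kubectl get svc jenkins
# An HTTP 200 response indicates the login page is reachable
curl -I http://192.168.175.101:31888/login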

7. Install ingress

Jenkins’ Web interface needs to be accessed from outside the cluster, and here we have chosen to use ingress. Create the jenkins-ingress.yaml file as shown below.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 31888
      host: jekins.binghe.com

Note that host must be set to a domain name or host name; otherwise, an error is reported, as shown below.

The Ingress "jenkins" is invalid: spec.rules[0].host: Invalid value: "192.168.175.101": must be a DNS name, not an IP address

Install the ingress using the following command.

kubectl apply -f jenkins-ingress.yaml 

Finally, since a VM is used to set up the environment, you need to configure the hosts file of the local host when accessing jekins.binghe.com, which is mapped from the VM. Add the following entry to the hosts file of the local host.

192.168.175.101 jekins.binghe.com

Note: On Windows, the hosts file is stored in the following directory:

C:\Windows\System32\drivers\etc

Next, you can access Jenkins in your browser at jekins.binghe.com:31888.

Install the SVN on the physical machine

In this example, the SVN is installed on the Master node (binghe101 server).

1. Install the SVN using yum

Run the following command on the CLI to install the SVN.

yum -y install subversion 

2. Create an SVN repository

Run the following commands in sequence.

# Create the /data/svn directory
mkdir -p /data/svn
# Initialize the SVN service with /data/svn as the root directory
svnserve -d -r /data/svn
# Create a repository named test
svnadmin create /data/svn/test
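
After svnadmin create completes, the repository skeleton should be in place; you can verify it like this.

# Expect conf, db, format, hooks, locks, README.txt
ls -l /data/svn/test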

3. Configure SVN

mkdir /data/svn/conf
cp /data/svn/test/conf/* /data/svn/conf/
cd /data/svn/conf/
[root@binghe101 conf]# ll
total 20
-rw-r--r-- 1 root root 1080 May 12 02:17 authz
-rw-r--r-- 1 root root  885 May 12 02:17 hooks-env.tmpl
-rw-r--r-- 1 root root  309 May 12 02:17 passwd
-rw-r--r-- 1 root root 4375 May 12 02:17 svnserve.conf
  • Configure the authz file
vim authz

The configuration information is as follows:

[aliases]
# joe = /C=XZ/ST=Dessert/L=Snake City/O=Snake Oil, Ltd./OU=Research Institute/CN=Joe Average

[groups]
# harry_and_sally = harry,sally
# harry_sally_and_joe = harry,sally,&joe
SuperAdmin = admin
binghe = admin,binghe

# [/foo/bar]
# harry = rw
# &joe = r
# * =

# [repository:/baz/fuz]
# @harry_and_sally = rw
# * = r

[test:/]
@SuperAdmin=rw
@binghe=rw
  • Configure the passwd file
vim passwd

The configuration information is as follows:

[users]
# harry = harryssecret
# sally = sallyssecret
admin = admin123
binghe = binghe123
  • Configure the svnserve.conf file
vim svnserve.conf

The configuration file is as follows.

### This file controls the configuration of the svnserve daemon, if you
### use it to allow access to this repository. (If you only allow
### access through http: and/or file: URLs, then this file is
### irrelevant.)

### Visit http://subversion.apache.org/ for more information.

[general]
### The anon-access and auth-access options control access to the
### repository for unauthenticated (a.k.a. anonymous) users and
### authenticated users, respectively.
### Valid values are "write", "read", and "none".
### Setting the value to "none" prohibits both reading and writing;
### "read" allows read-only access, and "write" allows complete 
### read/write access to the repository.
### The sample settings below are the defaults and specify that anonymous
### users have read-only access to the repository, while authenticated
### users have read and write access to the repository.
anon-access = none
auth-access = write
### The password-db option controls the location of the password
### database file. Unless you specify a path starting with a /,
### the file's location is relative to the directory containing
### this configuration file.
### If SASL is enabled (see below), this file will NOT be used.
### Uncomment the line below to use the default password file.
password-db = /data/svn/conf/passwd
### The authz-db option controls the location of the authorization
### rules for path-based access control. Unless you specify a path
### starting with a /, the file's location is relative to the
### directory containing this file. The specified path may be a
### repository relative URL (^/) or an absolute file:// URL to a text
### file in a Subversion repository. If you don't specify an authz-db,
### no path-based access control is done.
### Uncomment the line below to use the default authorization file.
authz-db = /data/svn/conf/authz
### The groups-db option controls the location of the file with the
### group definitions and allows maintaining groups separately from the
### authorization rules. The groups-db file is of the same format as the
### authz-db file and should contain a single [groups] section with the
### group definitions. If the option is enabled, the authz-db file cannot
### contain a [groups] section. Unless you specify a path starting with
### a /, the file's location is relative to the directory containing this
### file. The specified path may be a repository relative URL (^/) or an
### absolute file:// URL to a text file in a Subversion repository.
### This option is not being used by default.
# groups-db = groups
### This option specifies the authentication realm of the repository.
### If two repositories have the same authentication realm, they should
### have the same password database, and vice versa. The default realm
### is repository's uuid.
realm = svn
### The force-username-case option causes svnserve to case-normalize
### usernames before comparing them against the authorization rules in the
### authz-db file configured above. Valid values are "upper" (to upper-
### case the usernames), "lower" (to lowercase the usernames), and
### "none" (to compare usernames as-is without case conversion, which
### is the default behavior).
# force-username-case = none
### The hooks-env options specifies a path to the hook script environment 
### configuration file. This option overrides the per-repository default
### and can be used to configure the hook script environment for multiple 
### repositories in a single file, if an absolute path is specified.
### Unless you specify an absolute path, the file's location is relative
### to the directory containing this file.
# hooks-env = hooks-env

[sasl]
### This option specifies whether you want to use the Cyrus SASL
### library for authentication. Default is false.
### Enabling this option requires svnserve to have been built with Cyrus
### SASL support; to check, run 'svnserve --version' and look for a line
### reading 'Cyrus SASL authentication is available.'
# use-sasl = true
### These options specify the desired strength of the security layer
### that you want SASL to provide. 0 means no encryption, 1 means
### integrity-checking only, values larger than 1 are correlated
### to the effective key length for encryption (e.g. 128 means 128-bit
### encryption). The values below are the defaults.
# min-encryption = 0
# max-encryption = 256

Next, copy the svnserve.conf file from /data/svn/conf to /data/svn/test/conf/, as shown below.

[root@binghe101 conf]# cp /data/svn/conf/svnserve.conf /data/svn/test/conf/
cp: overwrite '/data/svn/test/conf/svnserve.conf'? y

4. Start the SVN service

(1) Create the svnserve.service service

Create the svnserve.service file.

vim /usr/lib/systemd/system/svnserve.service

The contents of the file are as follows.

[Unit]
Description=Subversion protocol daemon
After=syslog.target network.target
Documentation=man:svnserve(8)

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/svnserve
#ExecStart=/usr/bin/svnserve --daemon --pid-file=/run/svnserve/svnserve.pid $OPTIONS
ExecStart=/usr/bin/svnserve --daemon $OPTIONS
PrivateTmp=yes

[Install]
WantedBy=multi-user.target

Run the following command for the configuration to take effect.

systemctl daemon-reload

After the command is successfully executed, modify the /etc/sysconfig/svnserve file.

vim /etc/sysconfig/svnserve 

The content of the modified file is as follows:

# OPTIONS is used to pass command-line arguments to svnserve.
# 
# Specify the repository location in -r parameter:
OPTIONS="-r /data/svn"

(2) Start the SVN

Check the SVN status, as shown below.

[root@itence10 conf]# systemctl status svnserve.service
● svnserve.service - Subversion protocol daemon
   Loaded: loaded (/usr/lib/systemd/system/svnserve.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:svnserve(8)

As you can see, the SVN service has not been started yet. Next, start it.

systemctl start svnserve.service

Set the SVN service to start automatically upon startup.

systemctl enable svnserve.service

At this point, you can access the SVN repository at svn://192.168.0.10/test with the user name binghe and password binghe123.
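
For example, you can verify access with a test checkout from any machine with an SVN client; the target directory below is an arbitrary choice.

svn checkout svn://192.168.0.10/test /tmp/test-checkout --username binghe --password binghe123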

Install Jenkins on the physical machine

Note: Before installing Jenkins, install the JDK and Maven first. Here, I also install Jenkins on the Master node (the binghe101 server).

1. Enable the Jenkins repository

Run the following commands to download the repo file and import the GPG key:

wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

2. Install the Jenkins

Run the following command to install Jenkins.

yum install jenkins

Next, modify the Jenkins default port as shown below.

vim /etc/sysconfig/jenkins

The two configurations after modification are as follows:

JENKINS_JAVA_CMD="/usr/local/jdk1.8.0_212/bin/java"
JENKINS_PORT="18080"

At this point, the Jenkins port has been changed from 8080 to 18080.

3. Start Jenkins

Enter the following command on the command line to start Jenkins.

systemctl start jenkins

Configure Jenkins to start automatically upon startup.

systemctl enable jenkins

Check out Jenkins’ running status.

[root@itence10 ~]# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; generated)
   Active: active (running) since Tue 2020-05-12 04:33:40 EDT; 28s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 71 (limit: 26213)
   Memory: 550.8M

That means Jenkins has successfully started.

Configure the Jenkins operating environment

1. Log in to Jenkins

After the initial installation, you need to configure the Jenkins operating environment. First, type http://192.168.0.10:18080 in your browser's address bar to open the Jenkins interface.

Use the following command to find the password value on the server as prompted, as shown below.

[root@binghe101 ~]# cat /var/lib/jenkins/secrets/initialAdminPassword
71af861c2ab948a1b6efc9f7dde90776

Copy the password 71af861c2ab948a1b6efc9f7dde90776 into the text box and click Continue. The page jumps to the Customize Jenkins page, as shown below.

Here, you can directly select "Install recommended plug-ins". The page then jumps to a plug-in installation page, as shown below.

Downloads may fail during this step; such failures can be ignored.

2. Install plug-ins

The plug-ins to install are as follows.

  • Kubernetes CLI Plugin: allows the Kubernetes command line to be used directly in Jenkins.

  • Kubernetes Plugin: required to use Kubernetes.

  • Kubernetes Continuous Deploy Plugin: a Kubernetes continuous deployment plug-in; install it as required.

There are more plug-ins to choose from. You can click System Administration -> Manage Plugins to manage and add plug-ins, and install the corresponding Docker, SSH, and Maven plug-ins there. Other plug-ins can be installed as needed, as shown below.

3. Configure Jenkins

(1) Configure JDK and Maven

Configure the JDK and Maven in Global Tool Configuration. Click Global Tool Configuration to open the configuration screen.

Now it’s time to configure the JDK and Maven.

Because I installed Maven in /usr/local/maven-3.6.3 on the server, it needs to be configured in Maven Configuration, as shown below.

Next, configure the JDK as shown below.

Note: Do not check “Install Automatically”

Next, configure Maven as shown below.

Note: Do not check “Install Automatically”

(2) Configure SSH

Enter the Jenkins Configure System interface to configure SSH, as shown below.

Locate SSH remote hosts and configure them.

Once configured, click the Check connection button; "Successful connection" is displayed, as shown below.

At this point, Jenkins’ basic configuration is complete.

Jenkins releases the Docker project to the K8S cluster

1. Adjust the SpringBoot configuration

To do this, the pom.xml of the module that contains the SpringBoot boot class needs to include the configuration for packaging the project as a Docker image, as shown below.

  <properties>
        <docker.repository>192.168.0.10:1180</docker.repository>
        <docker.registry.name>test</docker.registry.name>
        <docker.image.tag>1.0.0</docker.image.tag>
        <docker.maven.plugin.version>1.4.10</docker.maven.plugin.version>
  </properties>

<build>
  		<finalName>test-starter</finalName>
		<plugins>
            <plugin>
			    <groupId>org.springframework.boot</groupId>
			    <artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
			
			<!-- docker maven plugin, website: https://github.com/spotify/docker-maven-plugin -->
			<! -- Dockerfile maven plugin -->
			<plugin>
			    <groupId>com.spotify</groupId>
			    <artifactId>dockerfile-maven-plugin</artifactId>
			    <version>${docker.maven.plugin.version}</version>
			    <executions>
			        <execution>
			        <id>default</id>
			        <goals>
			            <!-- If you do not want to use Docker packaging, comment out these goals -->
			            <goal>build</goal>
			            <goal>push</goal>
			        </goals>
			        </execution>
			    </executions>
			    <configuration>
			    	<contextDirectory>${project.basedir}</contextDirectory>
			        <!-- Harbor user name and password, read from the Maven settings file -->
			        <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
			        <repository>${docker.repository}/${docker.registry.name}/${project.artifactId}</repository>
			        <tag>${docker.image.tag}</tag>
			        <buildArgs>
			            <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
			        </buildArgs>
			    </configuration>
			</plugin>

        </plugins>
        
		<resources>
			<!-- Specify src/main/resources as a resource directory -->
			<resource>
				<directory>src/main/resources</directory>
				<targetPath>${project.build.directory}/classes</targetPath>
				<includes>
					<include>**/*</include>
				</includes>
				<filtering>true</filtering>
			</resource>
		</resources>
	</build>

Next, create a Dockerfile in the root directory of the module where the SpringBoot boot class resides, as shown in the following example.

# The base image this image depends on
FROM 192.168.0.10:1180/library/java:8
# Specify the image maintainer
MAINTAINER binghe
# Run directory
VOLUME /tmp
# Copy the local jar file into the container
ADD target/*.jar app.jar
# The command that is automatically executed after the container starts
ENTRYPOINT [ "java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar" ]

Modify it according to the actual situation.

Note: FROM 192.168.0.10:1180/library/java:8 requires that the following commands have been executed in advance.

docker pull java:8
docker tag java:8 192.168.0.10:1180/library/java:8
docker login 192.168.0.10:1180
docker push 192.168.0.10:1180/library/java:8

Create a YAML file named test.yaml in the root directory of the module where the SpringBoot boot class is located, with the contents shown below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-starter
  template:
    metadata:
      labels:
        app: test-starter
    spec:
      containers:
      - name: test-starter
        image: 192.168.0.10:1180/test/test-starter:1.0.0
        ports:
        - containerPort: 8088
      nodeSelector:
        clustertype: node12

---
apiVersion: v1
kind: Service
metadata:
  name: test-starter
  labels:
    app: test-starter
spec:
  ports:
    - name: http
      port: 8088
      nodePort: 30001
  type: NodePort
  selector:
    app: test-starter
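
Once the project has been released through the Jenkins build step described below, a check along these lines verifies the deployment; NodePort 30001 comes from the Service above, and the node IP here assumes the Master node.

kubectl get pods -l app=test-starter
# The application should answer on the NodePort
curl http://192.168.175.101:30001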

2. Configure the release project in Jenkins

Upload the project to the SVN repository, for example, svn://192.168.0.10/test.

Next, configure automatic publishing in Jenkins. The steps are as follows.

Click New Item.

Enter the description information in the description text box, as shown below.

Next, configure SVN information.

Note: The steps for configuring GitLab are the same as those for SVN.

Navigate to the Jenkins Build section and use Execute shell to build and release the project to the K8S cluster.

Run the following commands.

# Delete the local image; the image in the Harbor repository is not affected
docker rmi 192.168.0.10:1180/test/test-starter:1.0.0
# Use Maven to compile and build the Docker image; after this completes, the image is rebuilt locally
/usr/local/maven-3.6.3/bin/mvn -f ./pom.xml clean install -Dmaven.test.skip=true
# Log in to the Harbor repository
docker login 192.168.0.10:1180 -u binghe -p Binghe123
# Push the image to the Harbor repository
docker push 192.168.0.10:1180/test/test-starter:1.0.0
# Stop and delete the resources running in the K8S cluster
/usr/bin/kubectl delete -f test.yaml
# Republish the Docker image to the K8S cluster
/usr/bin/kubectl apply -f test.yaml

OK, that's enough for today. I'm Glacier. See you next time!