Source: Bitllion’s private blog

Production environment: CentOS 7

NFS server: CentOS 7

Management server: Ubuntu 18.04

The general process: install the K3s container orchestration system, run Rancher with Docker, hand K3s over to Rancher for management (web visualization), and use Rancher to deploy other applications

Install K3S

Why K3s over full K8s: it is lightweight, uses containerd instead of full Docker, adds native SQLite support, and ships Traefik as the built-in load balancer for the cluster network instead of the old nginx-ingress

The server and agent nodes are set up the same way:

If your server is outside mainland China, or you have a proxy that gets through the firewall, just run the following command:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy traefik" sh -
  • 1. Otherwise, use a proxy to download the k3s binary and the airgap images

    PS: the files below are copies I uploaded to a domestic public cloud bucket; they may not be the latest release, so an MD5 check against upstream may fail

    wget https://k3s-images.oss-cn-qingdao.aliyuncs.com/k3s 
    wget https://k3s-images.oss-cn-qingdao.aliyuncs.com/k3s-airgap-images-amd64.tar
  • 2. Grant execute permission to the k3s binary

    chmod +x k3s
  • 3. Put the airgap images in place for k3s (the airgap tarball contains the container images k3s needs to run offline)

    mkdir -p /var/lib/rancher/k3s/agent/images/
    cp k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/
  • 4. Install K3s without Traefik (we will use nginx-ingress instead for now)

    ./k3s server --no-deploy traefik
  • 5. Check whether the verification is successful

    kubectl get node

    ! If no hostname is returned, or an error is reported, set a hostname first. Kubernetes derives the node name from the local hostname; if it is still the default localhost value, registration will fail
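
To keep a manually installed k3s running across reboots, a minimal systemd unit can be used. This is only a sketch under the assumption that the binary sits at /usr/local/bin/k3s; the official install script generates a fuller unit:

```ini
# /etc/systemd/system/k3s.service -- minimal sketch (assumed binary path)
[Unit]
Description=Lightweight Kubernetes (k3s)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/k3s server --no-deploy traefik
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload && systemctl enable --now k3s.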

Docker installation and configuration

New:

  • 0.0 Installing Docker with Snap

19.09.08 update: I recommend the Snap package manager, a newer package manager that supports multiple Linux distributions. Because each package is sandboxed with its own dependencies, it avoids library/API conflicts and makes installing software on Linux almost as uniform and easy as on Windows

  • On systems where snapd is already available, install Docker with: snap install docker
  • 0.2 CentOS platform: if the EPEL repository has not been added, install it first
    yum install epel-release

    Then install Snapd

    yum install snapd

    Enable and start the snapd socket

    systemctl enable --now snapd.socket

    Create a symbolic link

    ln -s /var/lib/snapd/snap /snap

    Install Docker

    snap install docker

Old:

  • 1.1 Ubuntu:
    • 1.1.2 Install packages that let apt use HTTPS repositories
      apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    • 1.1.3 Add Docker's GPG public key
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    • 1.1.4 Add the repository source and update
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" && apt update
    • 1.1.5 Install Docker
      apt install docker-ce docker-ce-cli containerd.io
  • 1.2 CentOS
    • 1.2.1 Install HTTPS support
      yum install -y yum-utils device-mapper-persistent-data lvm2
    • 1.2.2 Add the repository
      yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    • 1.2.3 Install Docker
      yum install docker-ce docker-ce-cli containerd.io
  • 2. Add a registry mirror (image accelerator); here we use Alibaba Cloud's container registry accelerator:
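
A mirror is configured through the registry-mirrors key in Docker's daemon.json. A sketch, where the URL is a placeholder: Aliyun's console issues each account its own accelerator endpoint, so substitute yours:

```json
{
  "registry-mirrors": ["https://your-id.mirror.aliyuncs.com"]
}
```

Write this to /etc/docker/daemon.json and run systemctl restart docker for it to take effect.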

Rancher deployment

Rancher is the official web UI for managing Kubernetes clusters

  • 1. Pull and run rancher

    docker run -d --restart=unless-stopped -p 8443:443 --name rancher -v /data/rancher/:/var/lib/rancher rancher/rancher

    Open https://yourmaster_ip:8443 (the host port published in the docker run command above) to set a password and log in. Attention! You need to use the internal IP address here (the master IP reachable from inside the cluster; of course, if you need to add clusters at other public addresses, use the default public one)

  • 2. Deploy the K3s cluster

    • Click Add Cluster
    • Choose Import an existing Kubernetes cluster and create it
    • Execute the last command shown on the node

    After a few minutes the cluster shows Active and the deployment is complete

  • 3. Add domestic helm sources in the app store

Name       URL                           Branch   Scope
helm       mirror.azure.cn/kubernetes/…  master   global
incubator  mirror.azure.cn/kubernetes/…  master   global
  • 4. Rancher persistent configuration

  • 4.1 NFS persistence. On the NFS host, configure the persistent directory /data and enable the NFS server

    yum -y install rpcbind nfs-utils
    mkdir /data -p

    Configure the export directory and set the corresponding permissions. Add insecure to allow client ports above 1024

    echo "/data *(insecure,rw,no_root_squash,sync)" >> /etc/exports

    Reload the exports configuration, then start and enable the NFS services

    exportfs -r
    systemctl start rpcbind nfs-server
    systemctl enable rpcbind nfs-server

    Run showmount -e localhost to check whether the NFS export is visible:

    Export list for 192.168.6.39:
    /data *
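
Before reloading the NFS config, the exports line can be sanity-checked. The pattern below is only a loose sketch of the exports(5) shape (path, client spec, option list), not a full validator:

```shell
# the export line appended to /etc/exports above
EXPORT_LINE='/data *(insecure,rw,no_root_squash,sync)'

# loose shape check: an absolute path, a client spec, and an option list
if echo "$EXPORT_LINE" | grep -Eq '^/[^ ]+ [^ ]+\([a-z_,]+\)$'; then
  echo "exports line looks well-formed"
fi
```
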
  • 4.2 Example: the persistence workflow in Rancher

    ① Add a persistent volume (PV) to the cluster with the NFS plugin (if you have not configured NFS, you can use other backends, such as a local node path/disk); access mode: many-node read-write, same below

    ② Default project: add a PVC and use the existing PV

    ③ Default project: deploy the service as a workload and select the existing PVC as its data volume

    Conclusion: a persisted application needs both a PV (the volume) and a PVC (the claim on it)
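
The same PV/PVC pair can be declared in YAML instead of through the UI. A sketch: the names, the 10Gi size, and the server address (borrowed from the showmount example above) are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]   # many-node read-write, as above
  nfs:
    server: 192.168.6.39
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
```

Apply with kubectl apply -f, or create the equivalent objects in the Rancher UI as described above.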

Network optimization for nginx-ingress

Install nginx-ingress from the app store into the System project

Usage, in non-System projects:

① In the workload's port mapping, select Cluster IP as the network mode

② Add a load balancer (ingress) for the corresponding application. It helps if a wildcard domain resolves to the management node IP, e.g. a DNS record like *.app.cloud.cn >> rancher_ip

Assuming there is a workload (application) nginx, you can add an ingress load balancer like this: domain name nginx.app.cloud.cn, workload nginx, container port 80

Requests to nginx.app.cloud.cn will then be load-balanced onto the nginx service; similarly, wordpress.app.cloud.cn could resolve to a wordpress service
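
In YAML terms, that load balancer corresponds roughly to an Ingress like this (networking.k8s.io/v1 shown; older clusters may need an earlier apiVersion, and the service name nginx is assumed to match the workload):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-rule
spec:
  rules:
  - host: nginx.app.cloud.cn     # resolved by the *.app.cloud.cn wildcard
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx          # the workload's service
            port:
              number: 80         # the container port
```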

nginx-ingress lets a cluster with a single node IP run different web services without port conflicts. You can even deploy multiple nginx instances on the same cluster, and these “identical” nginx instances stay independent of each other, somewhat like a BGP network. If one instance fails while a backup instance is still running, traffic is routed to the backup, which avoids single-node failure and keeps the service available over the long term. In addition, a new version of a service can run in parallel with the old one, giving users enough time to transition to the new service

Deploy gitLab Community Edition

  • 1. Add a persistent volume (PV) named pv-gitlab (any name), mount point /data/pvgitlab, access mode many-node read/write (NFS mode)

  • 2. Add a PVC for GitLab, named pvc-gitlab, using the newly created PV

  • 3. Add the gitlab-ce workload (application):

    • 3.1 You can directly import a prepared YAML file for deployment

    • 3.2 If you use the UI, make sure of the following:

      Open ports 80 and 22 of the container and map them to user-defined ports. Mount the data volumes:

      /var/log/gitlab  ->  logs
      /var/opt/gitlab  ->  data
      /etc/gitlab      ->  config

      The gitlab.rb file under /data/pvgitlab/config/ is the GitLab configuration file

  • 4. Modify the GitLab configuration on the host:

    vim /data/pvgitlab/config/gitlab.rb

    Add:

    external_url 'http://yourgitlab_ip:port'
    nginx['listen_port'] = 80
    nginx['listen_https'] = false
    gitlab_rails['gitlab_shell_ssh_port'] = 32222

    By default, nginx listens on the ip:port given in external_url, so we have to redefine the port nginx actually listens on. gitlab_shell_ssh_port only changes the port shown in the SSH clone address; GitLab's sshd still listens on port 22 inside the container (you can verify this with cat /var/log/gitlab/sshd/current inside the gitlab container). So you must not change the container's port 22 to 32222 in the Rancher GitLab workload, otherwise connections will be rejected outright, or SSH public keys deployed on GitLab will fail to verify
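
As a reference for step 3.2, the container spec might look roughly like this fragment. The gitlab/gitlab-ce image, the claim name pvc-gitlab, and the subPath layout are assumptions matching the steps above; this is a sketch, not a complete Deployment:

```yaml
containers:
- name: gitlab
  image: gitlab/gitlab-ce:latest
  ports:
  - containerPort: 80     # web, mapped to a custom host port
  - containerPort: 22     # ssh, keep as 22 inside the container
  volumeMounts:
  - { name: gitlab-data, subPath: logs,   mountPath: /var/log/gitlab }
  - { name: gitlab-data, subPath: data,   mountPath: /var/opt/gitlab }
  - { name: gitlab-data, subPath: config, mountPath: /etc/gitlab }
volumes:
- name: gitlab-data
  persistentVolumeClaim:
    claimName: pvc-gitlab
```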

Integrated Jenkins

  • 1. Add the workload
  • 2. Get the login password: on the Jenkins workload, open the three-dot menu on the right, choose Execute command line (a shell into the workload), and run cat /var/jenkins_home/secrets/initialAdminPassword to obtain the key
  • 3. Change the update source to: https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json
  • 4. Install the gitlab, ssh, gitlab hook, and authen plugins

Runner integration

Open a command line on the gitlab-runner workload

Register a runner

gitlab-runner register

Fill in the URL and token shown at http://your_gitlab_ip/admin/runners as prompted. Fill in the rest according to your own needs, and select kubernetes as the executor
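
The interactive prompts can also be supplied as flags to gitlab-runner register. Registration needs a live GitLab, so the sketch below only assembles the command; the URL and token are placeholders to replace with the values from the admin page:

```shell
# non-interactive registration template (URL, token, description are placeholders)
REGISTER_CMD='gitlab-runner register --non-interactive \
  --url http://your_gitlab_ip/ \
  --registration-token YOUR_TOKEN \
  --executor kubernetes \
  --description k3s-runner'
echo "$REGISTER_CMD"
```
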

Integrating OpenLDAP

Waiting…

GitLab integration with the K3s cluster (there is a bug: GitLab needs to install Helm independently of Rancher, so this is not recommended for now)

To add the Kubernetes cluster in GitLab, we need to fill in the following information

In the Rancher web UI's cluster page you can use the kubectl command line to interact with K3s

! Note: new versions of GitLab disable webhooks to local networks by default, so go to Admin -> Settings -> Network -> Outbound Requests and check Allow requests to the local network from web hooks and services, or add a local IP whitelist

  • 1. URL

    https://localhost:6443 (replace localhost with the node IP address)

  • 2. Obtain the CA certificate: first get the name of the secret that holds it

    kubectl get secrets

    Then decode it, replacing name below with the name returned

    kubectl get secret name -o jsonpath="{['data']['ca\.crt']}" | base64 --decode
  • 3. Get the service token

    • 3.1 Creating a service account and assigning it the cluster administrator role
      cat <<EOF | kubectl apply -f -
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: gitlab-admin
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: gitlab-admin
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: gitlab-admin
        namespace: kube-system
      EOF
    • 3.2 Retrieving key Resources
      SECRET=$(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')
    • 3.3 Extract the token and print it
       TOKEN=$(kubectl -n kube-system get secret $SECRET -o jsonpath='{.data.token}' | base64 --decode)
       echo $TOKEN
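
The secret stores the token base64-encoded, which is why step 3.3 pipes it through base64 --decode. A tiny standalone illustration with a made-up sample value (real tokens come from kubectl as above):

```shell
# encode a sample value the way Kubernetes stores secret data,
# then decode it the same way step 3.3 does
SAMPLE=$(printf 'my-sa-token' | base64)
DECODED=$(printf '%s' "$SAMPLE" | base64 --decode)
echo "decoded: $DECODED"   # prints: decoded: my-sa-token
```
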