This took a long time to put together, so here is a batch of practical notes 😄🎉

Prepare the K8S cluster machines

1 K8S deployment machine (bastion host), 1 GB of RAM or more
3 K8S master nodes, 2 GB of RAM or more
3 K8S worker nodes, 2 GB of RAM or more

Assign fixed IP addresses to all seven machines above:

Machine           IP
k8s-ha-master1    172.16.67.130
k8s-ha-master2    172.16.67.131
k8s-ha-master3    172.16.67.132
k8s-ha-node1      172.16.67.135
k8s-ha-node2      172.16.67.136
k8s-ha-node3      172.16.67.137
k8s-ha-deploy     172.16.67.140
Install the K8S cluster

Log in to the deployment machine and generate an SSH key: run ssh-keygen -t rsa -b 4096 -C "[email protected]". Then copy the public key to all K8S machines:

ssh-copy-id 172.16.67.130
ssh-copy-id 172.16.67.131
ssh-copy-id 172.16.67.132
ssh-copy-id 172.16.67.135
ssh-copy-id 172.16.67.136
ssh-copy-id 172.16.67.137
ssh-copy-id 172.16.67.140
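
The same can be done with a small loop (a sketch; it assumes the IP list above and that each host still accepts password logins for the first copy):

# push the key to every host, then verify passwordless login works
for ip in 172.16.67.130 172.16.67.131 172.16.67.132 \
          172.16.67.135 172.16.67.136 172.16.67.137 172.16.67.140; do
  ssh-copy-id "$ip"
  ssh -o BatchMode=yes "$ip" hostname || echo "passwordless login to $ip failed"
done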

Download the K8S Docker installation kit

git clone https://github.com/gjmzj/kubeasz.git
mkdir -p /etc/ansible
mv kubeasz/* /etc/ansible

Refer to the quick-start document at https://github.com/gjmzj/kubeasz/blob/master/docs/setup/quickStart.md to download the binaries and the offline Docker images the K8S cluster needs, and extract them:

  • pan.baidu.com/s/1c4RFaA

Add the above machines to the Ansible inventory

cd /etc/ansible && cp example/hosts.m-masters.example hosts
# Deployment node: runs Ansible and provides chrony time sync for the cluster
[deploy]
127.0.0.1 NTP_ENABLED=no

# etcd cluster: provide NODE_NAME below; the member count must be odd (1, 3, 5, 7...)
[etcd]
172.16.67.130 NODE_NAME=etcd1
172.16.67.131 NODE_NAME=etcd2
172.16.67.132 NODE_NAME=etcd3

[new-etcd] # reserved for adding etcd nodes later
#192.168.1.x NODE_NAME=etcdx

[kube-master]
172.16.67.130
172.16.67.131
172.16.67.132

[new-master] # reserved for adding master nodes later
#192.168.1.5

[kube-node]
172.16.67.137 NEW_NODE=yes
172.16.67.136 NEW_NODE=yes
172.16.67.135

[new-node] # reserved for adding worker nodes later
#192.168.1.xx

# Parameter NEW_INSTALL: yes installs a new harbor, no uses an existing harbor server
# If you do not use a domain name, set HARBOR_DOMAIN=""
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no

# Load balancing with haproxy + keepalived
[lb]
172.16.67.130 LB_ROLE=backup
172.16.67.131 LB_ROLE=master

# [optional] external load balancer
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_VIP=192.168.1.250
#192.168.1.7 LB_ROLE=master EX_VIP=192.168.1.250

[all:vars]
# --------- main cluster parameters ---------------
# Deployment mode: allinone, single-master, multi-master
DEPLOY_MODE=multi-master

# Major cluster version: v1.8, v1.9, v1.10, v1.11, v1.12, v1.13
K8S_VER="v1.13"

# Cluster MASTER IP, i.e. the LB node VIP; on public clouds use the cloud load
# balancer's internal address and listening port
MASTER_IP="172.16.67.165"
KUBE_APISERVER="https://{{ MASTER_IP }}:8443"

# Cluster network plugin: calico, flannel, kube-router, cilium
CLUSTER_NETWORK="flannel"

# Service CIDR
SERVICE_CIDR="10.68.0.0/16"

# POD network (Cluster CIDR)
CLUSTER_CIDR="172.20.0.0/16"

# Service port range (NodePort range)
NODE_PORT_RANGE="20000-40000"

# kubernetes service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.68.0.2"

# Cluster DNS domain
CLUSTER_DNS_DOMAIN="cluster.local."

# Username and password for cluster basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="test1234"

# --------- additional parameters -----------------
# Default binary directory
bin_dir="/opt/kube/bin"

# Certificate directory
ca_dir="/etc/kubernetes/ssl"

# Deployment directory, i.e. the ansible working directory; do not modify
base_dir="/etc/ansible"
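
Before running the playbooks, it is worth verifying that Ansible can reach every host in the inventory (a quick sanity check; assumes the hosts file above is already in /etc/ansible):

cd /etc/ansible
# every host in the inventory should answer with "pong"
ansible all -m ping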

Run Ansible to install the cluster

ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml 
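When the playbooks finish, a quick check from the deployment machine should show all six nodes in Ready state (a sketch; kubeasz installs kubectl under the bin_dir configured above and sets up kubeconfig on the deploy node):

# list the nodes and core component health
/opt/kube/bin/kubectl get nodes -o wide
/opt/kube/bin/kubectl get componentstatuses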
Install Rancher and import the K8S cluster

Start Rancher with a Rancher image

docker run -d --name=rancher --restart=unless-stopped \
  -p 8880:80 -p 8843:443 \
  -v ~/rancher:/var/lib/rancher \
  rancher/rancher:stable
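To confirm Rancher came up (a quick check; the ports match the -p mappings above):

# the container should show as "Up", and the UI answers on the mapped HTTPS port
docker ps --filter name=rancher
curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost:8843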

Visit https://IP:8843 to view the result

Import the K8S cluster

Generate the import configuration

Copy the generated command and run it on the K8S deployment machine:

curl --insecure -sfL https://172.16.123.1:8843/v3/import/7gtwrh84nlpgkn48pj26lrzv4c8bt4mjl9f7r5w2sfprbt82tkdk6f.yaml | kubectl apply -f -
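
If the import succeeded, the Rancher agent pods show up shortly afterwards (the cattle-system namespace is created by Rancher's import manifest):

# the cattle agents connect the cluster back to Rancher
kubectl get pods -n cattle-system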

View the imported cluster on Rancher

Push the gateway and project images to Alibaba Cloud

Package the Java project and push its image to Alibaba Cloud; refer to github.com/neatlife/jf…

docker build -t jframework .
docker tag jframework:latest registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest
docker push registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest
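If the push is rejected with an authentication error, log in to the registry first (one-time setup; the registry host matches the image tags above, and the credentials are your Alibaba Cloud Container Registry account):

docker login registry.cn-hangzhou.aliyuncs.com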

Prepare the gateway K8S configuration file

{
  "kind": "DaemonSet",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "gateway",
    "namespace": "default",
    "labels": {
      "k8s-app": "gateway"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "selector": {
      "matchLabels": {
        "k8s-app": "gateway"
      }
    },
    "template": {
      "metadata": {
        "name": "gateway",
        "labels": {
          "k8s-app": "gateway"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "gateway",
            "ports": [
              {
                "containerPort": 8080,
                "hostPort": 8080,
                "name": "8080tcp80800",
                "protocol": "TCP"
              }
            ],
            "image": "registry.cn-hangzhou.aliyuncs.com/suxiaolin/gateway:latest",
            "readinessProbe": {
              "httpGet": {
                "scheme": "HTTP",
                "path": "/actuator/info",
                "port": 8080
              },
              "initialDelaySeconds": 10,
              "periodSeconds": 5
            },
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always",
            "securityContext": {
              "privileged": false,
              "procMount": "Default"
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "revisionHistoryLimit": 10
  }
}
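The file can be applied straight from the deployment machine (a sketch; assumes the JSON above is saved as gateway.json):

kubectl apply -f gateway.json
# the DaemonSet schedules one gateway pod per node, bound to hostPort 8080
kubectl get pods -l k8s-app=gateway -o wide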

Prepare the project K8S configuration file

{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "jframework",
    "namespace": "default",
    "labels": {
      "k8s-app": "jframework"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "k8s-app": "jframework"
      }
    },
    "template": {
      "metadata": {
        "name": "jframework",
        "labels": {
          "k8s-app": "jframework"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "jframework",
            "image": "registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest",
            "readinessProbe": {
              "httpGet": {
                "scheme": "HTTP",
                "path": "/heartbeat",
                "port": 8080
              },
              "initialDelaySeconds": 10,
              "periodSeconds": 5
            },
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always",
            "securityContext": {
              "privileged": false,
              "procMount": "Default"
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  }
}

Import the project on Rancher and view the results

The gateway can reach the application cluster through K8S's built-in DNS names, for example jframework.default:8080
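
A name like jframework.default only resolves if a Service with that name exists in the default namespace. A minimal sketch of such a Service (the name, selector, and port are taken from the Deployment above; the original setup presumably defines an equivalent one):

kubectl apply -f - <<'EOF'
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "jframework", "namespace": "default" },
  "spec": {
    "selector": { "k8s-app": "jframework" },
    "ports": [ { "port": 8080, "targetPort": 8080, "protocol": "TCP" } ]
  }
}
EOF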

K8S's built-in DNS comes with load balancing of its own

Install the configuration center (Apollo)

Download the Apollo Docker toolkit and start it

git clone https://github.com/ctripcorp/apollo.git
cd apollo/scripts/docker-quick-start/
docker-compose up -d
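To confirm the stack is up (a quick check; the Apollo quick-start compose file exposes the portal on port 8070 by default):

docker-compose ps
# the portal should answer once all containers finish starting
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8070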

View the result

Install ELK

Download the ELK Docker toolkit (github.com/deviantony/…) and start it

git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d
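To check that logs flow end to end, push a test line into Logstash (a sketch; docker-elk's default pipeline listens for TCP input on port 5000):

# send one test log line to the Logstash TCP input, then search for it in Kibana
echo '{"message": "hello from the cluster"}' | nc localhost 5000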

Visit http://IP:5601 to view the result

Install Pinpoint

Download the Pinpoint Docker toolkit and start it

git clone https://github.com/naver/pinpoint-docker.git
cd pinpoint-docker
docker-compose up -d pinpoint-hbase pinpoint-mysql pinpoint-web pinpoint-collector pinpoint-agent zoo1 zoo2 zoo3 jobmanager taskmanager
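Applications report to Pinpoint through its Java agent, attached at JVM startup (a sketch; the agent jar path, agentId, and applicationName are placeholders that depend on how the agent is mounted into the application image):

# attach the Pinpoint agent when starting the Java application
java -javaagent:/pinpoint-agent/pinpoint-bootstrap.jar \
     -Dpinpoint.agentId=jframework-01 \
     -Dpinpoint.applicationName=jframework \
     -jar app.jar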

Visit http://IP:8079 to view the result

Configure Jenkins project release

Create a Maven build project on Jenkins, then deploy to the cluster through the Rancher CLI:

/opt/rancher/rancher kubectl apply -f k8s.yml
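A post-build shell step on the Jenkins job can tie the pipeline together (a sketch under assumptions: the workspace contains the Dockerfile and a k8s.yml, the image tag matches the one pushed earlier, and the Rancher CLI lives at /opt/rancher/rancher as above):

# package, build and push the image, then roll it out through Rancher
mvn -B -DskipTests package
docker build -t registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest .
docker push registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest
/opt/rancher/rancher kubectl apply -f k8s.yml
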
Configure Alibaba Cloud SLB load balancing

In the Alibaba Cloud SLB console (slb.console.aliyun.com/slb/cn-hang…):

Create a load balancer and point it at the gateway (the gateway DaemonSet binds hostPort 8080 on every node, so the backends are the node IP addresses on port 8080)

Tool set

The tools used in this Java CI/CD environment setup:

Tool                 Role
Nexus                Maven repository server
Jenkins              Automated packaging and publishing
Docker               Application container runtime
GitLab               Source code management
Yearning             SQL audit
SonarQube            Code quality review
Maven & Gradle       Project packaging tools
kubectl              K8S cluster control tool
K8S                  Project runtime environment
Rancher              Simplified K8S management tool
Apollo               Configuration center for project cluster configuration
Pinpoint             Project exception and performance monitoring
ELK                  Application log collection
Alibaba Cloud SLB    Load balancing
Ansible              Linux command automation tool
ShowDoc              Project document management

Continuously updated…