Build a Docker + Kubernetes environment, then deploy microservices, and finally expose them for external access.

Because of network problems, downloads and installations may fail; be patient and retry a few times until they succeed.

Be patient with the installation!! Be patient with the installation!! Be patient with the installation!!

Environment

CentOS: 7.5

Docker: 19.03

Kubernetes: 1.18

Prepare two CentOS machines for this test: one as the Master and one as the Worker.

Install Docker

Use scripts to install Docker
Log in to CentOS as the root user and make sure the YUM packages are up to date:

yum update
Execute the Docker installation script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

This script adds the docker.repo source and installs Docker.

Start Docker and set it to start on boot:
systemctl start docker
systemctl enable docker

At this point, the Docker installation on CentOS is complete.
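To confirm that Docker is working, a quick sanity check (the hello-world image here is just an illustration; any small test image will do):

docker version                # both client and server versions should be displayed
docker run --rm hello-world   # pulls and runs a tiny test container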

Configure the image accelerator

Because of network conditions in mainland China, pulling Docker images is very slow, so we configure a mirror accelerator to speed things up.

Add configuration:

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://847pb1vj.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

Reload the daemon configuration and restart Docker:

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
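To verify that the accelerator configuration took effect, one way (assuming the daemon restarted cleanly) is to inspect the daemon info:

docker info | grep -A 1 "Registry Mirrors"   # should print the Aliyun mirror address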

Disable the Firewall

systemctl stop firewalld      # stop firewalld now
systemctl disable firewalld   # prevent firewalld from starting on boot

Disable SELinux

Modify /etc/selinux/config to set SELINUX=disabled, then restart the server.
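For reference, a minimal sketch of the commands that do this (assuming SELinux is currently in enforcing mode):

setenforce 0                                                           # turn SELinux off immediately, until the next reboot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # make the change permanent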

Disable Swap Partitions

swapoff -a
vim /etc/sysconfig/kubelet

Change it to: KUBELET_EXTRA_ARGS="--fail-swap-on=false"
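Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently, one common approach (an assumption here; adapt it to your own /etc/fstab) is to comment out the swap entry:

sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out every line that mentions swap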

Install Kubernetes

We will install kubelet, kubeadm, and kubectl, deploy the Dashboard, and add a Worker node.

Add the source

Because of domestic network problems, the address in the official documentation is unreachable, so this article uses the Aliyun mirror address instead:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
Install
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
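To confirm the tools are installed, a quick version check:

kubeadm version
kubelet --version
kubectl version --client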
Modify the network configuration
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Note: everything up to this point, including the Docker installation, must also be performed on the Worker machine.

Initialize the Master
Generate the initialization file
kubeadm config print init-defaults > kubeadm-init.yaml

There are three changes to this file:

  1. Change advertiseAddress: 1.2.3.4 to the local address (*** be sure to use the internal address ***).

  2. Change imageRepository: k8s.gcr.io to imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers.

  3. Change nodeRegistration.name to k8s-master (on Worker nodes, use a different name).

After modification:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.237
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Download the images
kubeadm config images pull --config kubeadm-init.yaml

The image pull output is displayed; this step may take a few minutes.
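To verify that the images arrived, you can list the local images pulled from the Aliyun repository:

docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers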

Perform initialization
kubeadm init --config kubeadm-init.yaml

After the command is executed, the following information is displayed:

W0328 21:31:14.954124   17338 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "k8s-master" could not be reached
	[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.237]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.237 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.237 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"W0328 21:31:18.364428 17338 MANIFESTS"Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"W0328 21:31:18.365138 17338 MANIFESTS"Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.501912 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.237:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:964c7b0c007ce17c979e631da17ad047dfa3bad76e407b6ee76d729ecf3cd9c7

Save the last two lines: the kubeadm join ... command is what you will run later to join the Worker node to the cluster.

Note: the initialization may hang and eventually time out with an error like the following:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

The cause is that advertiseAddress in kubeadm-init.yaml was not configured with the intranet address, which leads to network problems.
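If this happens, one recovery path (a sketch; double-check your own configuration first) is to reset kubeadm, fix the address, and initialize again:

kubeadm reset -f                          # wipe the half-finished control plane
vim kubeadm-init.yaml                     # set advertiseAddress to the intranet IP
kubeadm init --config kubeadm-init.yaml   # re-run the initialization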

Next, configure the environment so that the current user can execute kubectl commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test it with kubectl get node; the status is NotReady because the network has not been configured yet.

[root@ecs-babc-0004 kubernetes]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m25s   v1.17.3
Configure the network

Install the Calico network plugin:

[root@ecs-babc-0004 ~]# wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
[root@ecs-babc-0004 ~]# cat kubeadm-init.yaml | grep serviceSubnet
  serviceSubnet: 10.96.0.0/12

Edit calico.yaml with vim and change 192.168.0.0/16 to 10.96.0.0/12.

Note that the CIDR in calico.yaml must be consistent with the one in kubeadm-init.yaml: either modify kubeadm-init.yaml before initialization, or modify calico.yaml after initialization.
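One way to make the change without opening an editor (a sketch, assuming the CIDR appears only in the CALICO_IPV4POOL_CIDR value; if that value is commented out in your copy of the manifest, uncomment it as well):

sed -i 's#192.168.0.0/16#10.96.0.0/12#g' calico.yaml
grep -n '10.96.0.0/12' calico.yaml   # confirm the replacement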

Execute kubectl apply -f calico.yaml to initialize the network.

[root@ecs-babc-0004 kubernetes]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the node information again after a few minutes; the Master's state is now Ready:

[root@ecs-babc-0004 kubernetes]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   11m   v1.18.0

There is a pitfall here that cost Half Smoke a long time: the calico/cni:v3.8.8 image either could not be pulled at all or pulled extremely slowly. After finding the cause, I switched the version to calico/cni:v3.9.5 and everything was fine within a minute.
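One way to switch versions is to replace every Calico image tag in the manifest and re-apply it (a sketch, assuming v3.8.8 is the tag your copy of the manifest references):

sed -i 's/v3.8.8/v3.9.5/g' calico.yaml
kubectl apply -f calico.yaml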

The troubleshooting process was as follows:

First, kubectl get pods -n kube-system showed the Pods stuck in Pending: the image pull kept running, sometimes failing to finish within an hour.

Once the images pull normally, the situation looks like this:

[root@ecs-babc-0004 kubernetes]# kubectl get pods -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-5fc5dbfc47-4gvcb   0/1     Pending    0          43s
calico-node-gvnnc                          0/1     Init:2/3   0          43s
coredns-546565776c-sx5rj                   0/1     Pending    0          6m52s
coredns-546565776c-xtb92                   0/1     Pending    0          6m52s
etcd-k8s-master                            1/1     Running    0          6m51s
kube-apiserver-k8s-master                  1/1     Running    0          6m51s
kube-controller-manager-k8s-master         1/1     Running    0          6m51s
kube-proxy-7jk4h                           1/1     Running    0          6m52s
kube-scheduler-k8s-master                  1/1     Running    0          6m50s

Then run kubectl describe po calico-node-gvnnc -n kube-system to view the Pod events:

Events:
  Type    Reason     Age    From                 Message
  ----    ------     ----   ----                 -------
  Normal  Scheduled  2m14s  default-scheduler    Successfully assigned kube-system/calico-node-gvnnc to k8s-master
  Normal  Pulling    2m14s  kubelet, k8s-master  Pulling image "calico/cni:v3.9.5"
  Normal  Pulled     99s    kubelet, k8s-master  Successfully pulled image "calico/cni:v3.9.5"
  Normal  Created    98s    kubelet, k8s-master  Created container upgrade-ipam
  Normal  Started    98s    kubelet, k8s-master  Started container upgrade-ipam
  Normal  Pulled     97s    kubelet, k8s-master  Container image "calico/cni:v3.9.5" already present on machine
  Normal  Created    97s    kubelet, k8s-master  Created container install-cni
  Normal  Started    97s    kubelet, k8s-master  Started container install-cni
  Normal  Pulling    96s    kubelet, k8s-master  Pulling image "calico/pod2daemon-flexvol:v3.9.5"
  Normal  Pulled     74s    kubelet, k8s-master  Successfully pulled image "calico/pod2daemon-flexvol:v3.9.5"
  Normal  Created    74s    kubelet, k8s-master  Created container flexvol-driver
  Normal  Started    74s    kubelet, k8s-master  Started container flexvol-driver
  Normal  Pulling    74s    kubelet, k8s-master  Pulling image "calico/node:v3.9.5"
  Normal  Pulled     47s    kubelet, k8s-master  Successfully pulled image "calico/node:v3.9.5"
  Normal  Created    47s    kubelet, k8s-master  Created container calico-node
  Normal  Started    47s    kubelet, k8s-master  Started container calico-node
Install the Dashboard
Deploy the Dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

Run kubectl get pods --all-namespaces to check the Pod status:

[root@ecs-babc-0004 kubernetes]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS              RESTARTS   AGE
kube-system            calico-kube-controllers-5fc5dbfc47-4gvcb     1/1     Running             0          5m15s
kube-system            calico-node-gvnnc                            1/1     Running             0          5m15s
kube-system            coredns-546565776c-sx5rj                     1/1     Running             0          11m
kube-system            coredns-546565776c-xtb92                     1/1     Running             0          11m
kube-system            etcd-k8s-master                              1/1     Running             0          11m
kube-system            kube-apiserver-k8s-master                    1/1     Running             0          11m
kube-system            kube-controller-manager-k8s-master           1/1     Running             0          11m
kube-system            kube-proxy-7jk4h                             1/1     Running             0          11m
kube-system            kube-scheduler-k8s-master                    1/1     Running             0          11m
kubernetes-dashboard   dashboard-metrics-scraper-66b49655d4-bb6px   0/1     ContainerCreating   0          8s
kubernetes-dashboard   kubernetes-dashboard-74b4487bfc-5dw84        0/1     ContainerCreating   0          8s
Create a user

Create a dashboard-adminuser.yaml file with the following contents:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system


Run kubectl apply -f dashboard-adminuser.yaml to create the user.
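To confirm that the ServiceAccount and its ClusterRoleBinding were created:

kubectl get serviceaccount admin-user -n kube-system
kubectl get clusterrolebinding admin-user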

Generate a certificate

The official documentation describes the login method for version 1.7.x and later, but it is not very clear, so Half Smoke did not follow it exactly. Go to the .kube directory and execute three commands:

[root@ecs-babc-0004 .kube]# grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
[root@ecs-babc-0004 .kube]# grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
[root@ecs-babc-0004 .kube]# openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

The third command prompts for a password when generating the certificate; you can press Enter twice to skip it. Checking the file list afterwards, you can see three new certificate-related files:

* kubecfg.crt
* kubecfg.key
* kubecfg.p12

Download kubecfg.p12 to your local machine:

scp [email protected]:/root/.kube/kubecfg.p12 ./

On a Mac, double-click the downloaded p12 file to install the certificate; you will be asked for the certificate password during installation.

Now we can log in to the panel at https://{k8s-master-ip}:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login, where {k8s-master-ip} is the server's external IP address. When you open the page, you will be prompted to select the certificate; after confirming, you will be asked for a user name and password (note: these are your local computer's user name and password).
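If the page does not load, it is worth confirming that the Dashboard service exists, since its namespace and name are part of the URL:

kubectl get svc -n kubernetes-dashboard   # should list the kubernetes-dashboard service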

Then the Dashboard login screen pops up:

Log in to the Dashboard

In the .kube directory on the server, run kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') to get the Token.

[root@ecs-babc-0004 .kube]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-9gjgz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: a70bac13-dc07-49d2-9f4d-4296654ad66f

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRoaGtiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiMjBkMTE0My1jZTk0LTQzNzktOWUxNC04ZjgwZjA2ZDg0NzkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.f6IbPGwIdFZWStzBj8_vmF01oWW5ccaCpPuVQNLSK1pgEqn0kNVK_x0RYSuKEnujObzpQQdFiRYcI6ITHja2PIVc5Nv83VCn5IaLvZdYuGZWUYRw0efJUBMA4J4N8-pRkiw6fYAuWLeGYghLNXL_nDdC_JkG75ASqrr3U1MVaikOcfrEPaI-T_AJ3TMYhI8aFoKiERpumu5W1K6Jl80Am9pWDX0Ywis5SSUP1VYfu-coI48EXSptcaxEyv58PrHUd6t_oMVV9rpqSxrNtMZvMeXqe8Hnl21vR7ls5yTZegYtHXSc3PKvCaIalKhYXAuhogNcIXHaMzvLSbf-DSQkVw

Copy the Token to the login page and click login. The main interface is as follows:

Add the Worker

Repeat the earlier steps on the Worker machine: install Docker, install Kubernetes, and modify the network configuration.

Run the following command to add the Worker to the cluster:

kubeadm join 192.168.0.237:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:964c7b0c007ce17c979e631da17ad047dfa3bad76e407b6ee76d729ecf3cd9c7
  • Note: the token and hash were generated when the Master was initialized; see the previous section.
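The bootstrap token is only valid for 24 hours (the ttl in kubeadm-init.yaml). If it has expired by the time you add the Worker, you can generate a fresh join command on the Master:

kubeadm token create --print-join-command   # prints a new kubeadm join ... command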

After the addition, check the status of Worker node on Master:

[root@ecs-babc-0004 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
ecs-babc-0006   Ready    <none>   93s    v1.18.0
k8s-master      Ready    master   174m   v1.18.0
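The Worker's ROLES column shows <none> because kubeadm does not label worker nodes. If you want the role displayed, you can optionally add the label yourself (purely cosmetic):

kubectl label node ecs-babc-0006 node-role.kubernetes.io/worker=worker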

You can also view it on Dashboard:

Conclusion

So far, the setup of Docker and Kubernetes on CentOS is complete. There are many steps, but as long as you are patient, you can certainly finish the installation. You should know that while setting this up, Half Smoke reinstalled the system more than a dozen times and almost broke down, haha.

If you have any questions, please feel free to reach out to Half Smoke (see About Me). Thank you for reading.


Reference

Thanks to the following authors:

Install Docker on Linux

Build K8S from scratch with official documentation

Error links:

Check kubelet and kubeadm init

Kubernetes some error collection

K8s error message solution