Configuration requirements
- Three ECS instances, each with 2 cores and 4 GB of memory
- CentOS 7.6
The software versions after installation are as follows:
- Kubernetes 1.17.0
- Docker 19.03.5
Prepare ECS
Prepare three ECS instances, change their hostnames, and configure /etc/hosts. Perform the following operations on all three machines.
- Run the hostnamectl command to set the hostname: name machine 1 master, machine 2 worker1, and machine 3 worker2.
```
sudo hostnamectl set-hostname master    # on machine 1
sudo hostnamectl set-hostname worker1   # on machine 2
sudo hostnamectl set-hostname worker2   # on machine 3
```
- Configure hosts: open the /etc/hosts file in a text editor and add the following entries.
```
192.168.0.154 master
192.168.0.155 worker1
192.168.0.156 worker2
```
Here 192.168.0.154, 192.168.0.155, and 192.168.0.156 are the private (intranet) IP addresses of the three machines; replace them with your own.
- Restart the instance so the new hostname takes effect.
```
sudo reboot
```
- Log in to the instance again and verify that the hostname has been updated.
```
hostname
```
- Check whether the firewall is running; if it is, disable it (see the sketch after this list).
```
firewall-cmd --state
```
- Check whether swap is enabled; if it is, disable it.
```
free -g
```
- Check whether SELinux is enforcing; if it is, disable it.
```
getenforce
```
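The original does not spell out the disable commands for the last three checks. Below is a minimal sketch of one common way to turn each of them off on CentOS 7; adapt it to your own environment and security policy:

```
# Firewall: stop firewalld now and keep it disabled across reboots
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Swap: turn it off immediately, then comment out swap entries in /etc/fstab
# so the change survives a reboot (this sed edit is one possible approach)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# SELinux: go permissive for the current boot and disable it permanently
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```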
Install Docker
Uninstall any old versions of Docker:
```
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-selinux \
    docker-engine-selinux \
    docker-engine
```
Run the following command to install the dependency packages:
```
sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
```
Because of network issues in mainland China, it is strongly recommended to use a domestic mirror; the official source is kept in the comments below for reference. Run the following command to add the yum repository:
```
sudo yum-config-manager \
    --add-repo \
    https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo

# the official source
# sudo yum-config-manager \
#     --add-repo \
#     https://download.docker.com/linux/centos/docker-ce.repo
```
Update the yum package cache and install docker-ce:
```
sudo yum makecache fast
sudo yum install docker-ce
```
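The command above installs the latest docker-ce. If you want to reproduce the 19.03.5 setup from this article, you can list the available builds and pin the version explicitly (standard yum version syntax; the exact version string may differ in your repository):

```
yum list docker-ce --showduplicates | sort -r
sudo yum install -y docker-ce-19.03.5 docker-ce-cli-19.03.5 containerd.io
```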
Start the Docker service
```
sudo systemctl enable docker
sudo systemctl start docker
```
Create /etc/docker/daemon.json:
```
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
```
You can also add a registry mirror (accelerator) to this daemon configuration, such as Aliyun's image accelerator, to speed up image pulls.
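For example, a daemon.json with a registry mirror added might look like the sketch below; the accelerator URL is a placeholder, so substitute the address from your own Aliyun console:

```
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"],
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
```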
Restart Docker:
```
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Modify /etc/sysctl.conf
Add the following content to /etc/sysctl.conf
```
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
Then apply the settings:
```
sudo sysctl -p
```
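One assumption worth noting (not part of the original steps): the net.bridge.* settings only take effect when the br_netfilter kernel module is loaded, so if sysctl -p complains about missing keys, load it first:

```
sudo modprobe br_netfilter
echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf   # load on every boot
```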
Install kubelet, kubeadm, kubectl
Configure the Kubernetes yum repository:
```
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install kubelet, kubeadm, and kubectl:
```
sudo yum install -y kubelet kubeadm kubectl
```
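This installs the latest versions. To match the Kubernetes 1.17.0 used throughout this article, you can pin the versions instead (standard yum syntax):

```
sudo yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
```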
Start the kubelet
```
sudo systemctl enable kubelet
sudo systemctl start kubelet
```
Initialize the master node
Run the kubeadm init command on the machine chosen as the master to initialize the control plane. The main options are:
Option | Description |
---|---|
apiserver-advertise-address | The IP address the API server advertises that it is listening on; 0.0.0.0 means all IP addresses on the machine. |
pod-network-cidr | The IP address range of the pod network. If set, the control plane automatically allocates CIDRs to each node. |
service-cidr | The IP address range for services (default 10.96.0.0/12). |
We then specify these options and run the following command to initialize the master node:
```
sudo kubeadm init \
    --kubernetes-version=v1.17.0 \
    --apiserver-advertise-address=192.168.0.154 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12 \
    --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers"
```
If the following error occurs
```
Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
```
Check whether the IP address given to --apiserver-advertise-address is actually the master's IP address.
After the master node is initialized, note the output at the end of the command; it will be needed later.
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.154:6443 --token xf6jwp.qwabzranq2q8ptwb \
    --discovery-token-ca-cert-hash sha256:a62cf69bd5a6ea6ac90e8eff936e5770eaa3bfaf44ec2bdd76f1a5c391ab280b
```
Run the commands printed by kubeadm init
After the cluster's master node is up, we manage the cluster with kubectl, which first needs a configuration file for authentication:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
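To confirm that kubectl can now reach the API server, any read-only query will do, for example:

```
kubectl cluster-info
```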
Install the Flannel network plug-in
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
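As a quick sanity check (not part of the original steps), you can watch the flannel pods start in the kube-system namespace before moving on:

```
kubectl get pods -n kube-system -w
```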
Check the cluster status
After the installation is complete, we can use the following command to check that the cluster components are working properly:
```
kubectl get cs
```
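On a healthy cluster the output looks roughly like this (illustrative; the exact etcd health message may differ):

```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
```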
Add nodes to the cluster
Find the “kubeadm Join “script printed when starting the master node with “kubeadm init” and run it on machine 2 and machine 3 respectively
```
kubeadm join 10.0.0.78:6443 --token zv6zpw.oyx2u2rhnrq6xvqk \
    --discovery-token-ca-cert-hash sha256:c8f59b16ea300f10450e9a6adc152509b20a1b0f3ece9cc3d86ab1530afe2ca6
```
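If the token has already expired (kubeadm tokens are valid for 24 hours by default), you can print a fresh join command on the master:

```
kubeadm token create --print-join-command
```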
Check the initialization result
Execute on the master node
```
kubectl get nodes
```
The following output is displayed:
```
NAME      STATUS     ROLES    AGE     VERSION
master    Ready      master   28m     v1.17.0
worker1   Ready      <none>   8m59s   v1.17.0
worker2   NotReady   <none>   13s     v1.17.0
```
worker2 shows NotReady because it has only just joined; it becomes Ready once its network pods are up and running.