Vagrant creates three hosts. All operations are performed as the root user.
| host | ip |
| --- | --- |
| node1 | 192.168.56.101 |
| node2 | 192.168.56.102 |
| node3 | 192.168.56.103 |
Change the SSH login settings on all three hosts:
```shell
vim /etc/ssh/sshd_config
```
Modify these three values:
```
PermitRootLogin yes
PubkeyAuthentication yes
PasswordAuthentication yes
```
Restart the sshd service:
```shell
service sshd restart
```
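Editing `sshd_config` by hand on three hosts is error-prone; the same change can be scripted with `sed`. A sketch that operates on a local demo copy so it is safe to run anywhere; on the real hosts, point `CONF` at `/etc/ssh/sshd_config` and restart sshd afterwards:

```shell
# Demo copy so this sketch is safe to run anywhere;
# on the real hosts, set CONF=/etc/ssh/sshd_config instead.
CONF=./sshd_config.demo
printf '%s\n' '#PermitRootLogin prohibit-password' \
              '#PubkeyAuthentication yes' \
              'PasswordAuthentication no' > "$CONF"

# Flip the three options in place, keeping a backup as *.bak.
# Assumes the directives already exist, possibly commented out.
sed -i.bak -E \
  -e 's/^#?PermitRootLogin .*/PermitRootLogin yes/' \
  -e 's/^#?PubkeyAuthentication .*/PubkeyAuthentication yes/' \
  -e 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' \
  "$CONF"

grep -E '^(PermitRootLogin|PubkeyAuthentication|PasswordAuthentication)' "$CONF"
```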
Since the root password set by Vagrant is unknown, change it manually:
```shell
passwd
```
Configure passwordless SSH login on node1, executing as root:
```shell
ssh-keygen
```
Press Enter at every prompt to accept the defaults.
Copy the public key to the other host, node2:

```shell
ssh-copy-id root@192.168.56.102
```
Repeat the previous step to copy the key to node3 and node1.
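The per-node copying can also be done in one loop. A sketch, assuming the root password set above (`ssh-copy-id` will prompt for it once per node); the `BatchMode` check then confirms passwordless login actually works:

```shell
# Distribute node1's public key to every node, then verify
# that key-based login succeeds without a password prompt.
for ip in 192.168.56.101 192.168.56.102 192.168.56.103; do
  ssh-copy-id "root@$ip"
  ssh -o BatchMode=yes "root@$ip" true && echo "passwordless login to $ip OK"
done
```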
Install pip3:

```shell
apt update && apt install python3-pip
```
Clone Kubespray and check out the release-2.16 branch:

```shell
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout release-2.16
```
Then follow the steps from the official documentation:
```shell
# Install dependencies from ``requirements.txt``
sudo pip3 install -r requirements.txt

# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
# Change the IP addresses to your own
declare -a IPS=(192.168.56.101 192.168.56.102 192.168.56.103)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under ``inventory/mycluster/group_vars``
cat inventory/mycluster/group_vars/all/all.yml
# This file contains the basic configuration of the k8s cluster, such as the network, and can be modified
cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
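Once the playbook finishes, a quick sanity check from node1 confirms all three nodes joined. A sketch, assuming the kubeadm-generated admin kubeconfig at its default path on the first master:

```shell
# On node1 (first master). If kubectl complains about credentials,
# point it at the admin kubeconfig generated by kubeadm:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes -o wide
```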
Dashboard installation
```shell
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
```
Access token
```shell
# Execute these commands on the first master node
# Administrator token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}')
# Common (read-only) user token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kuboard-viewer | awk '{print $1}')
```
Access the dashboard
```shell
kubectl get svc -n kube-system
```

Output:

```
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   65m
kuboard    NodePort    10.96.208.85   <none>        80:32567/TCP             4m6s
```
To access the dashboard, open port 32567 on any worker node.
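Rather than reading the port off the table by eye, the NodePort can be queried directly. A sketch using kubectl's jsonpath output, assuming the `kuboard` service name shown above:

```shell
# Print only kuboard's NodePort (32567 in the output above)
kubectl get svc kuboard -n kube-system -o jsonpath='{.spec.ports[0].nodePort}'
```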