This article is part of the Cabbage Java Self-Study Room series, which covers core knowledge and technology.
1. Rancher
Rancher is a container management platform built for organizations that run containers. It simplifies the process of adopting Kubernetes, letting developers run Kubernetes everywhere while meeting IT requirements and empowering DevOps teams.
2. Configure Docker with the Aliyun image accelerator
See the official Aliyun Docker image accelerator help document for details.
Each account gets its own accelerator address; the prefix before .mirror.aliyuncs.com is unique to your account:
https://xxxxxx.mirror.aliyuncs.com
Enable the accelerator by modifying the daemon configuration file /etc/docker/daemon.json as prompted (a minimal sketch follows):
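A minimal sketch of /etc/docker/daemon.json, assuming xxxxxx is the prefix assigned to your account (keep any keys already present in the file):
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
EOF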
Remember to restart the Docker service after configuration:
sudo systemctl restart docker    # or: sudo service docker restart
3. Install Rancher 2.5.8
sudo docker run --privileged -d --restart=unless-stopped -p 81:80 -p 444:443 rancher/rancher
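Optionally, confirm the container came up before opening the UI; the grep filter is just a convenience and not part of the official steps:
sudo docker ps | grep rancher/rancher
sudo docker logs -f <rancher container ID>    # Ctrl+C to stop following the logs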
Note the port mappings -p 81:80 -p 444:443: the Rancher UI is mapped to host ports 81 and 444 so that ports 80 and 443 stay free for the cluster's nginx-ingress controller later.
Open ports 80, 81, 443, and 444 on the firewall:
firewall-cmd --zone=public --add-port=80-81/tcp --permanent
firewall-cmd --zone=public --add-port=443-444/tcp --permanent
Reload the firewall rules so the changes take effect immediately:
firewall-cmd --reload
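To double-check that the ports are open, you can list the zone's ports (an optional verification step, not in the original instructions):
firewall-cmd --zone=public --list-ports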
Once the container is up and running, open the URL in a browser on port 444 to complete the initialization:
https://<your host IP address>:444
After the initial password and the server address are set, a K3s cluster named local automatically appears on the main screen.
K3s is a lightweight, highly available, CNCF-certified Kubernetes distribution that is easy to install and designed for unattended, resource-constrained, remote-location, or IoT production workloads. It is packaged as a single binary of less than 60 MB, reducing the dependencies and steps needed to install, run, and automatically update a production Kubernetes cluster.
Special note: do not rush to create your own K8s cluster at this point. Many image downloads will be blocked, so set the image source first.
Open the Settings menu, find system-default-registry, and add the following address:
registry.cn-hangzhou.aliyuncs.com
4. Create a single-node K8S cluster
Go back to the home screen and click Add Cluster to create a K8s cluster:
Here we just need to select Existing Nodes:
You are advised to keep the default settings for everything else. Remember the K8s version number, though; it will be needed later:
Check all the roles here, because a single node needs every one of them:
The page then generates a registration command based on your settings; copy it to the host's command line, run it, and then click OK (a redacted example is shown below):
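For reference, the command Rancher generates for a 2.5 custom cluster usually looks roughly like the sketch below. The server address, token, and checksum are placeholders that Rancher fills in, so always copy the real command from the UI rather than this one:
sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:v2.5.8 \
  --server https://<your host IP address>:444 \
  --token <token> --ca-checksum <checksum> \
  --etcd --controlplane --worker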
Rancher then returns to the main screen automatically. Since this is the first cluster creation, it takes quite a while:
(The author had to retry several times because of port conflicts and a mismatched image source, so ignore the inconsistent names in the screenshots.)
Note: the first time a cluster is created, Rancher downloads a lot of images. During the process you will see errors like the one below; since the image source was set in advance, it will eventually succeed, so don't panic.
When all is done, return to the main screen and our K8S cluster is up and running:
Click Explorer and you will find a dashboard similar to the Kubernetes Dashboard:
To find nginx-ingress, switch the namespace selection to System:
You can see that the nginx-ingress service has started on ports 80 and 443:
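Once kubectl is connected in the next section, the same thing can be checked from the command line; the namespace name below assumes the default RKE ingress-nginx deployment:
kubectl get pods -n ingress-nginx -o wide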
With the K8s cluster set up, the host machine still needs kubectl and Helm 3, so be sure to finish reading this article.
5. Install kubectl and Helm 3 on the host
Note that our K8s version is v1.20.8, so adjust the version number in the download URL to match yours:
curl -LO https://dl.k8s.io/release/v1.20.8/bin/linux/amd64/kubectl
Install it by copying the binary into place:
cp kubectl /usr/local/bin
Remember to give it executable permission:
chmod 755 /usr/local/bin/kubectl
Check the kubectl version number:
kubectl version
kubectl needs a kubeconfig file to connect to the cluster; you can copy the cluster's kubeconfig contents from Rancher (the Kubeconfig File option on the cluster page).
Create a new config file and place it in the following location: ~/.kube/config:
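A minimal sketch of putting the kubeconfig in place, following kubectl's default file location and conservative permissions; paste the content copied from Rancher into the editor:
mkdir -p ~/.kube
vi ~/.kube/config        # paste the kubeconfig copied from Rancher, then save
chmod 600 ~/.kube/config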
Now verify that kubectl can reach the cluster:
kubectl get nodes
kubectl get pods --all-namespaces
Now let's download and install Helm 3:
wget https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz
Extract the archive:
tar -zxvf helm-v3.6.3-linux-amd64.tar.gz
Install it by copying the binary into place:
cp linux-amd64/helm /usr/local/bin
Remember to give it executable permission:
chmod 755 /usr/local/bin/helm
Check the Helm version number to confirm the installation succeeded:
helm version
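As an optional smoke test for Helm, add a public chart repository and list it; the Bitnami repository URL here is only a common example, not something this setup requires:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list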
At this point the K8s cluster is basically set up. Let's try it out; enter your host IP directly in the browser:
https://<your host IP address>
The default nginx page appears, which means everything is working. The next step would be deploying projects onto the K8s cluster, which is not covered here.