Introduction
This article uses simple examples to quickly introduce beginners to the world of Docker and Kubernetes (K8S) containers. It assumes you already have a K8S cluster; if not, you can quickly set up an experimental environment using Minikube or MiniShift.
Docker
Docker and K8S
Docker is essentially a virtualization technology like KVM, Xen, and VMware, but it is lighter; when deployed in a Linux environment, it relies on Linux container technology (LXC). One difference between Docker and traditional virtualization technologies such as KVM is that Docker has no kernel of its own: all Docker virtual machines share the host kernel. In short, Docker can be regarded as a virtual machine without a kernel, where each Docker virtual machine has its own software environment and is independent of the others.
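Since containers share the host kernel, a quick check (a sketch, assuming Docker is installed and the busybox image is available) is to compare the kernel version inside and outside a container:
# uname -r                            # kernel version on the host
# docker run --rm busybox uname -r    # a container reports the same version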
The relationship between K8S and Docker is like that of OpenStack to KVM, or vSphere to VMware. K8S is a container cluster management system, with Docker providing the underlying container virtualization. Application operators do not need to deal with the underlying Docker nodes directly; they manage everything through K8S.
Docker basics
Executing the following command pulls the busybox image from the official docker.io image library (Registry) to the local host, then starts a virtual machine named test-docker from that image, which in official Docker terminology is called a Container.
# docker run -it --name test-docker busybox /bin/sh
Unable to find image 'busybox:latest' locally
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
f70adabe43c0: Pull complete
Digest: sha256:186694df7e479d2b8bf075d9e1b1d7a884c6de60470006d572350573bfa6dcd2
/ #
Docker is lighter than traditional KVM and VMware virtual machines. As shown below, the test-docker container does not run any additional system or kernel processes; it runs only the /bin/sh process supplied to the docker run command:
/ # ps -ef
PID USER TIME COMMAND
1 root 0:00 /bin/sh
7 root 0:00 ps -ef
If you create a virtual machine in OpenStack, you first store the virtual machine image in the Glance image library, and then select the image to create the virtual machine. Docker works the same way: it officially provides a shared image Registry in which all kinds of images are stored. This example creates a container from the busybox image[1]; once the image has been pulled locally, you can execute the following command to see that it is only about 1MB, which is quite lightweight.
# docker images | grep busybox
docker.io/busybox   latest   8ac48589692a   5 weeks ago   1.146 MB
Through this section, we have learned the three basic elements of Docker: the Image contains the software environment a program needs to run; Images are stored in the image repository (Registry); and when a Container is deployed, its Image is pulled over the network to the Docker host.
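As a recap of the three elements, the following sketch (container and image names as used above) walks an image through its life cycle:
# docker pull busybox                              # fetch the image from the Registry
# docker run -d --name demo busybox sleep 3600     # create a container from the image
# docker ps                                        # the container is running
# docker stop demo && docker rm demo               # stop and remove the container
# docker rmi busybox                               # remove the local image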
Kubernetes
K8S is Google’s open-source container cluster management system, derived from Google’s internal management system Borg. The following simple, coherent example will lead beginners through a K8S cluster.
Pod
K8S uses the Pod as the minimum unit for scheduling and managing Docker containers. A Pod can contain multiple containers; containers in the same Pod share the local network and can reach one another via the localhost address, that is, they are deployed on the same host. Scheduling with the Pod as the minimum unit means all containers in a Pod are scheduled onto the same Docker node.
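To illustrate this sharing, a Pod with two containers might look like the following sketch (the sidecar container and its command are illustrative additions, not part of this article's running example); the sidecar reaches the httpd container through localhost:
# kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: httpd                  # serves on port 80
  - name: sidecar
    image: busybox
    # polls the web container over the shared localhost network
    command: ['sh', '-c', 'while true; do wget -q -O- http://localhost:80; sleep 10; done']
EOF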
As shown below, create a Pod named myhttp that contains one container, also named myhttp, deployed from the httpd image:
# cat > /tmp/myhttpd.pod <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myhttp
  labels:
    app: myhttp
spec:
  containers:
  - name: myhttp
    image: httpd
EOF
# kubectl create -f /tmp/myhttpd.pod
Execute the kubectl get pod command to see that the Pod is running, then verify that the container provides web service:
# kubectl get pod
NAME     READY   STATUS    RESTARTS   AGE
myhttp   1/1     Running   0          1h
# kubectl describe pod myhttp | grep IP
IP:  10.109.0.232
# curl 10.109.0.232
<html><body><h1>It works!</h1></body></html>
Deployment
It is rare to deploy an application directly as a Pod, mainly because a bare Pod cannot scale elastically and, if its node fails, K8S cannot reschedule it onto a surviving node; it lacks self-healing ability. For this reason, applications are usually deployed via RC/Deployment, and in newer versions of K8S it is officially recommended to use Deployment instead of ReplicationController (RC) to deploy stateless applications.
After executing kubectl delete pod myhttp to delete the Pod, redeploy it with a Deployment:
# cat > /tmp/myhttp.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: myhttp
  name: myhttp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myhttp
  template:
    metadata:
      labels:
        app: myhttp
    spec:
      containers:
      - image: httpd
        name: myhttp
EOF
# kubectl create -f /tmp/myhttp.yaml
The .spec.replicas field of the Deployment states how many Pods to deploy; in this example there is currently just one Pod:
# kubectl get deploy,pod
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/myhttp 1 1 1 1 2m
NAME READY STATUS RESTARTS AGE
po/myhttp-7bc6d8b87c-gzlkq 1/1 Running 0 2m
After executing kubectl delete pod, a replacement Pod is created automatically:
# kubectl delete pod myhttp-7bc6d8b87c-gzlkq
# kubectl get pod -w
NAME                      READY   STATUS              RESTARTS   AGE
myhttp-7bc6d8b87c-dhmtz   0/1     ContainerCreating   0          2s
myhttp-7bc6d8b87c-dhmtz   1/1     Running             0          8s
myhttp-7bc6d8b87c-gzlkq   1/1     Terminating         0          8m
When you need to scale an application deployed as bare Pods, you have to delete or create the Pods yourself; when it is deployed as a Deployment, you only need to adjust .spec.replicas and the K8S controller adjusts the number of Pods automatically. As shown below, scale the httpd application to 2 replicas:
# kubectl scale deploy/myhttp --replicas=2
# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
myhttp-7bc6d8b87c-cj4g8 0/1 ContainerCreating 0 3s
myhttp-7bc6d8b87c-zsbcc 1/1 Running 0 8m
myhttp-7bc6d8b87c-cj4g8 1/1 Running 0 18s
# kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
myhttp 2 2 2 2 21m
Execute the following command to check the IP address of the new Pod:
# kubectl describe pod myhttp-7bc6d8b87c-cj4g8 | grep IP
IP:  10.129.3.28
Service
A Service is analogous to a traditional hardware load balancer such as F5 or A10, but in K8S it is implemented in software and tracks the back-end servers in real time as the application scales, with no manual adjustment needed.
Internal access
We will create a Service for the myhttp application deployed in the previous section. Before that, we create a Pod to act as an in-cluster client for validating the Service; to keep this Pod running, its command executes an infinite loop in the foreground.
# kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myclient
  labels:
    app: myclient
spec:
  containers:
  - name: myclient
    image: centos
    command: ['sh','-c','while true; do sleep 3600; done;']
EOF
Create a myhttp-int Service for the myhttp application with the following command:
# kubectl expose deployment myhttp --port=8080 --target-port=80 --name=myhttp-int
service "myhttp-int" exposed
The above command is equivalent to creating the Service manually from the following YAML file: it defines a Service named myhttp-int whose port 8080 forwards to port 80 on the back-end Pods. The selector picks the Pods labeled app: myhttp; in the myhttp Deployment, .spec.template.metadata.labels defines exactly this label, so the myhttp application can be reached via myhttp-int:8080.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myhttp
  name: myhttp-int
spec:
  clusterIP:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: myhttp
  sessionAffinity: None
Accessing the Service from the test container via myhttp-int:8080, you can see that the load is balanced across the two back-end Pods:
# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
myclient                  1/1     Running   0          1h
myhttp-7bc6d8b87c-cj4g8   1/1     Running   0          1d
myhttp-7bc6d8b87c-zsbcc   1/1     Running   0          1d
# kubectl exec -it myhttp-7bc6d8b87c-cj4g8 -- sh -c "hostname > htdocs/index.html"
# kubectl exec -it myhttp-7bc6d8b87c-zsbcc -- sh -c "hostname > htdocs/index.html"
# kubectl exec -it myclient -- curl myhttp-int:8080
myhttp-7bc6d8b87c-cj4g8
# kubectl exec -it myclient -- curl myhttp-int:8080
myhttp-7bc6d8b87c-zsbcc
When scaling the Pods, the following commands show that the Service dynamically tracks its Endpoints:
# kubectl get endpoints myhttp-int
NAME         ENDPOINTS                        AGE
myhttp-int   10.129.0.237:80,10.129.3.28:80   1h
# kubectl scale deploy myhttp --replicas=3
# kubectl get endpoints myhttp-int
NAME         ENDPOINTS                                        AGE
myhttp-int   10.129.0.237:80,10.129.3.28:80,10.131.0.194:80   1h
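The selector and the endpoints it tracks can also be seen in a single view with kubectl describe (a sketch; output abbreviated):
# kubectl describe svc myhttp-int
Name:       myhttp-int
Selector:   app=myhttp
Endpoints:  10.129.0.237:80,10.129.3.28:80,10.131.0.194:80
...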
External access
If an application needs to serve clients outside the K8S cluster, it can create a Service of type NodePort. Every node in the cluster then listens on the port specified by nodePort, so external applications can reach the in-cluster service through any node.
# kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myhttp
  name: myhttp-pub
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    protocol: TCP
    targetPort: 80
  selector:
    app: myhttp
  sessionAffinity: None
EOF
Execute the following command to check the Services: one is of type ClusterIP, the other of type NodePort, but both have been assigned a ClusterIP address:
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myhttp-int ClusterIP 172.30.37.43 <none> 8080/TCP 1h
myhttp-pub NodePort 172.30.6.69 <none> 8080:30001/TCP 3m
The myhttp-pub service opens the host port of each node in the cluster via nodePort. At this point, the service can be accessed from any node in the cluster:
# curl 192.168.220.21:30001
myhttp-7bc6d8b87c-zsbcc
# curl 192.168.230.21:30001
myhttp-7bc6d8b87c-zsbcc
# curl 192.168.240.21:30001
myhttp-7bc6d8b87c-cj4g8
A Service of type NodePort can expose an application outside the cluster, but it has problems: the number of ports is limited (the default range is 30000-32767), and if a node fails, access through that node fails with it. For these reasons the NodePort type is not commonly used; Ingress is used instead to expose services outside the cluster. For simplicity, Ingress is not covered in this article.
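For reference only, an Ingress exposing the myhttp-int Service might look like the following sketch (it assumes an Ingress controller is already running in the cluster, and the host name is illustrative):
# kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myhttp-ing
spec:
  rules:
  - host: myhttp.example.com        # illustrative host name
    http:
      paths:
      - path: /
        backend:
          serviceName: myhttp-int
          servicePort: 8080
EOF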
ConfigMap
If you need to customize the httpd.conf file of the httpd image, you should not log in to each container and modify the configuration directly. Instead, consider the ConfigMap[2] technology provided by K8S: a central configuration store whose entries are shared with Pods as mounted files.
For simplicity’s sake, as shown below, we create an arbitrary file, mount it into the Deployment, then modify the ConfigMap and scale the Deployment, using these steps to illustrate the role of ConfigMap.
Create a cm[3] named my-config:
# kubectl create -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  hosts: |
    127.0.0.1 localhost localhost.localdomain
    #::1 localhost localhost.localdomain
EOF
Execute kubectl edit deploy myhttp to modify the Deployment and mount the cm into the /etc/myhosts directory. The complete YAML file looks like this (note the added volumeMounts and volumes):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: myhttp
  name: myhttp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myhttp
  template:
    metadata:
      labels:
        app: myhttp
    spec:
      containers:
      - image: httpd
        name: myhttp
        volumeMounts:
        - name: config-hosts
          mountPath: /etc/myhosts
      volumes:
      - name: config-hosts
        configMap:
          name: my-config
After the Deployment is modified, the Pods are automatically rebuilt. You can then check that the directory in each Pod contains the hosts file from the cm:
# kubectl get pod
NAME READY STATUS RESTARTS AGE
myhttp-774ffbb989-gz6bd 1/1 Running 0 11m
myhttp-774ffbb989-k8m4b 1/1 Running 0 11m
myhttp-774ffbb989-t74nk 1/1 Running 0 11m
# kubectl exec -it myhttp-774ffbb989-gz6bd -- ls /etc/myhosts
hosts
# kubectl exec -it myhttp-774ffbb989-gz6bd -- cat /etc/myhosts/hosts
127.0.0.1 localhost localhost.localdomain
#::1 localhost localhost.localdomain
Modify the cm, and within a few minutes the configuration inside the Pods is updated automatically:
# kubectl edit cm my-config
...
data:
  hosts: |
    127.0.0.1 localhost localhost.localdomain
    ::1 localhost localhost.localdomain
...
# kubectl exec -it myhttp-774ffbb989-gz6bd -- cat /etc/myhosts/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
Scale the application, then check a new Pod: it also contains the cm content:
# kubectl scale deploy myhttp --replicas=4
# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
myhttp-774ffbb989-gz6bd   1/1     Running   0          15h
myhttp-774ffbb989-k8m4b   1/1     Running   0          15h
myhttp-774ffbb989-t74nk   1/1     Running   0          15h
myhttp-774ffbb989-z5d6h   1/1     Running   0          21s
# kubectl exec -it myhttp-774ffbb989-z5d6h -- cat /etc/myhosts/hosts
127.0.0.1 localhost localhost.localdomain
::1 localhost localhost.localdomain
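Besides being mounted as files, ConfigMap entries can also be injected as environment variables, though unlike mounted files these are not refreshed when the cm changes. A minimal sketch of the container spec (the variable name is illustrative):
    spec:
      containers:
      - image: httpd
        name: myhttp
        env:
        - name: MY_HOSTS          # illustrative variable name
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: hosts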
Secret
Compared with ConfigMap, which stores plaintext, Secret is meant for sensitive data such as user passwords, storing its values in encoded form. As shown below, we create a Secret holding a user name and password and provide it to the container.
Opaque Secret data is a map whose values must be base64-encoded. Encode the user name and password:
# echo -n root | base64
cm9vdA==
# echo -n Changeme | base64
Q2hhbmdlbWU=
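You can verify the encoding by decoding the values back:
# echo cm9vdA== | base64 -d
root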
Create a Secret named userpwd-secret containing the user name and password:
# kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: userpwd-secret
type: Opaque
data:
  username: cm9vdA==
  password: Q2hhbmdlbWU=
EOF
Update Deployment to mount the secret in volume:
# kubectl edit deployment myhttp
...
spec:
  ...
    spec:
      containers:
      - image: httpd
        ...
        volumeMounts:
        - name: userpwd
          mountPath: /etc/mysecret
      ...
      volumes:
      - name: userpwd
        secret:
          secretName: userpwd-secret
...
Logging in to the container, you can see that each key of the Secret is saved as a file whose content is the corresponding value, already decoded inside the container:
# kubectl exec -it myhttp-64575c77c-kqdj9 -- ls -l /etc/mysecret
lrwxrwxrwx. 1 root root 15 May 17 07:01 password -> ..data/password
lrwxrwxrwx. 1 root root 15 May 17 07:01 username -> ..data/username
# kubectl exec -it myhttp-64575c77c-kqdj9 -- cat /etc/mysecret/username
root
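Like a ConfigMap, a Secret can also be consumed as environment variables instead of files; a sketch of the container's env section (the variable names are illustrative):
        env:
        - name: APP_USERNAME      # illustrative variable names
          valueFrom:
            secretKeyRef:
              name: userpwd-secret
              key: username
        - name: APP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: userpwd-secret
              key: password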
Storage
We now save the web application to external storage and mount it into the Pods, so that the published application survives Pod rebuilds and scaling.
Configure NFS storage
For simplicity, this example uses NFS as shared storage:
Install the software on the NFS server:
# yum install nfs-utils
Configure the shared directory:
# mkdir -p /exports/httpd
# chmod 0777 /exports/*
# chown nfsnobody:nfsnobody /exports/*
# cat > /etc/exports.d/k8s.exports <<EOF
/exports/httpd *(rw,root_squash)
EOF
Configure the firewall to open the NFS port:
# firewall-cmd --add-port=2049/tcp
# firewall-cmd --permanent --add-port=2049/tcp
Configure SELinux to allow Docker to write data to NFS:
# getsebool -a|grep virt_use_nfs
# setsebool -P virt_use_nfs=true
Start the NFS service:
# systemctl restart nfs-config
# systemctl restart nfs-server
# systemctl enable nfs-server
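You can verify from a client machine that the directory is exported (assuming the NFS server address 192.168.240.11 used later in this section):
# showmount -e 192.168.240.11
Export list for 192.168.240.11:
/exports/httpd *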
Use the storage in the K8S cluster
Install the NFS client software on each node of the K8S cluster and set the SELinux permission:
# yum install nfs-utils
# setsebool -P virt_use_nfs=true
Create a PersistentVolume (PV) of type NFS that points to the NFS back-end storage:
# kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: httpd
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  nfs:
    path: /exports/httpd
    server: 192.168.240.11
  persistentVolumeReclaimPolicy: Retain
EOF
Create a PersistentVolumeClaim (PVC) that points to the PV created in the previous step:
# kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: httpd
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: httpd
EOF
Check that pvc/httpd is bound to pv/httpd:
# oc get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM ...
pv/httpd 1Gi RWX Retain Bound demo/httpd ...
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/httpd Bound httpd 1Gi RWX 53s
Rebuild Deployment and add Volume and mount points:
# kubectl delete deploy myhttp
# kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: myhttp
  name: myhttp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myhttp
  template:
    metadata:
      labels:
        app: myhttp
    spec:
      containers:
      - image: httpd
        name: myhttp
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: config-hosts
          mountPath: /etc/myhosts
        - name: userpwd
          mountPath: /etc/mysecret
        - name: httpd-htdocs
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: config-hosts
        configMap:
          name: my-config
      - name: userpwd
        secret:
          secretName: userpwd-secret
      - name: httpd-htdocs
        persistentVolumeClaim:
          claimName: httpd
EOF
After the Pods are generated, check that the NFS directory is mounted into the containers:
# kubectl get pod
# kubectl exec -it myhttp-8699b7d498-dlzrm -- df -h
Filesystem                      Size  Used  Avail  Use%  Mounted on
...
192.168.240.11:/exports/httpd   37G   17G   21G    44%   /usr/local/apache2/htdocs
...
# kubectl exec -it myhttp-8699b7d498-dlzrm -- ls htdocs
# the current directory is empty
Log in to any container and publish the web application to the htdocs directory:
# kubectl exec -it myhttp-8699b7d498-dlzrm -- /bin/sh
# echo "this is a test of pv" > htdocs/index.html
Whether we delete Pods or scale the application, the htdocs directory in every container still contains the published application:
# kubectl delete pod -l app=myhttp
# kubectl get pod
# kubectl exec -it myhttp-8699b7d498-6q8tv -- cat htdocs/index.html
this is a test of pv
StatefulSet
The myhttp application created with Deployment above is stateless: host names are assigned randomly and dynamically, and all Pods can share the same mounted volume. Clusters such as Kafka and ZooKeeper, however, are stateful, and their host names must be fixed and unique. K8S provides the StatefulSet technology to meet the needs of such applications.
As shown below, we use the nginx image to create a stateful cluster to illustrate StatefulSet usage.
Unlike with a Deployment, we must first create a headless Service, one whose clusterIP is None:
# kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: nginx-web
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx-web
EOF
This Service has no ClusterIP, which means we cannot reach the back-end Pods directly through the Service itself:
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web ClusterIP None <none> 80/TCP 3s
Create a stateful application named nginx with 2 replicas, noting that serviceName is set to the Service created in the previous step:
# kubectl create -f - <<EOF
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: web
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
EOF
Watching the Pods start, you can see that the Pod names take the form nginx-N[4]; these names are fixed and unique. You can also see that the Pods start in order: nginx-N starts only after nginx-<N-1> is running.
# kubectl get pod -w
NAME      READY   STATUS              RESTARTS   AGE
nginx-0   0/1     ContainerCreating   0          7s
nginx-0   1/1     Running             0          10s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     ContainerCreating   0          1s
nginx-1   1/1     Running             0          13s
The Service created earlier is used by the StatefulSet to register the Pod names in DNS:
# kubectl run -i --tty --image busybox ds-test --restart=Never --rm /bin/sh
/ # nslookup web          # the web service has two Pods on the back end
...
Name:      web
Address 1: 10.129.0.248 nginx-0.web.demo.svc.cluster.local
Address 2: 10.131.0.200 nginx-1.web.demo.svc.cluster.local
/ # nslookup nginx-0.web  # verify that the Pod name resolves to its IP address
...
Name:      nginx-0.web.demo.svc.cluster.local
Address 1: 10.129.0.248 nginx-0.web.demo.svc.cluster.local
/ # nslookup nginx-1.web
...
Name:      nginx-1.web.demo.svc.cluster.local
Address 1: 10.131.0.200 nginx-1.web.demo.svc.cluster.local
Configure the StatefulSet mount volume:
# kubectl delete statefulset nginx
# kubectl create -f - <<EOF
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: web
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: glusterfs-raid0
      resources:
        requests:
          storage: 10Mi
EOF
Note: in the volumeClaimTemplates spec we add storageClassName, which names the glusterfs-raid0 storage class; when the Pods are generated, K8S uses dynamic provisioning[5] to create the PVCs and PVs and automatically allocates volumes from the glusterfs-raid0 storage pool. If you use the NFS storage configured in the Storage section instead, delete storageClassName here and create the storage, PV, and PVC manually.
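If you take the NFS route, the PVCs generated from volumeClaimTemplates will be named www-nginx-0 and www-nginx-1, so matching PVs must exist beforehand; a sketch of one such PV (the export path is illustrative):
# kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-nginx-0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Mi
  nfs:
    path: /exports/nginx-0        # illustrative export path
    server: 192.168.240.11
  persistentVolumeReclaimPolicy: Retain
EOF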
Check:
# the following volumes were created from GlusterFS automatically by K8S using dynamic provisioning:
# kubectl get pvc
NAME          STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS      AGE
www-nginx-0   Bound    pvc-4a76e4a9...   1Gi        RWO            glusterfs-raid0   22h
www-nginx-1   Bound    pvc-536e8980...   1Gi        RWO            glusterfs-raid0   22h
# kubectl get statefulset,pod
NAME                 DESIRED   CURRENT   AGE
statefulsets/nginx   2         2         22h
NAME         READY   STATUS    RESTARTS   AGE
po/nginx-0   1/1     Running   0          22h
po/nginx-1   1/1     Running   0          22h
# kubectl exec -it nginx-0 -- df -h
Filesystem                     Size   Used  Avail  Use%  Mounted on
192.168.220.21:vol_e6858...    1016M  33M   983M   4%    /usr/share/nginx/html
# kubectl exec -it nginx-1 -- df -h
Filesystem                     Size   Used  Avail  Use%  Mounted on
192.168.220.21:vol_c659cc...   1016M  33M   983M   4%    /usr/share/nginx/html
Namespace
Careful readers will have noticed demo/httpd in the Storage section: demo is the Namespace (Project[6]) the author uses. Just as the OpenStack cloud platform provides multi-tenancy, where each tenant can create its own project, K8S also provides multi-tenancy: we can create different namespaces and confine the Pods, Services, ConfigMaps, and so on shown above to their namespaces.
A newly built K8S cluster contains the following namespaces by default:
# kubectl get namespace
NAME          DISPLAY NAME   STATUS
default                      Active
kube-system                  Active
We can create a namespace by executing the following command:
# kubectl create namespace demo
namespace "demo" created
Then kubectl commands can be executed with the "-n <namespace>" parameter; query the Pods as shown below:
# kubectl get pod -n demo
NAME READY STATUS RESTARTS AGE
nginx-0 1/1 Running 0 23h
nginx-1 1/1 Running 0 23h
Finally, on the OpenShift platform, we can execute the following command to switch into a Namespace, so that we do not have to append "-n <namespace>" every time:
# oc project demo
# oc get pod
NAME READY STATUS RESTARTS AGE
nginx-0 1/1 Running 0 23h
nginx-1 1/1 Running 0 23h
Conclusion
Through this article, we have covered the core concepts of Docker and K8S; readers should now be ready to start working with the K8S platform.
1. The image format is <image_name>:<image_tag>; if image_tag is omitted, the latest tag is assumed. ↩
2. Refer to the official documentation: Configure a Pod to Use a ConfigMap. ↩
3. The content is in key:value format, and one cm can contain multiple entries. ↩
4. StatefulSet generates Pod names by a fixed rule: <statefulset-name>-n. ↩
5. The storage must support dynamic provisioning; for example, GlusterFS supports it only when heketi is configured. ↩
6. On the OpenShift platform, a Project is a K8S Namespace. ↩