Preface
Containerization and cloud native are getting hotter and hotter, and new concepts keep appearing. With the explosion of information comes layers of fog. I tried to trace the main thread from the perspective of scaling an application. After hands-on exploration, I organized my notes into an introductory course consisting of the following four articles.
- From Docker to Istio I – Containerize the application with Docker
- From Docker to Istio II – Deploy the application with Compose
- From Docker to Istio III – Orchestrate the application with Kubernetes
- From Docker to Istio IV – Manage the application with Istio
This is the third article: orchestrating the application with Kubernetes.
Kubernetes
Kubernetes is an open source platform for managing containerized applications across multiple hosts. Its goal is to make deploying containerized applications simple, efficient, and powerful; it provides mechanisms for application deployment, planning, updating, and maintenance.
Kubernetes means helmsman or navigator in Greek, which fits its role in container cluster management: like the commander of a fleet of ships carrying containers, it is responsible for global scheduling and operation monitoring. Because there are eight letters between the K and the s, it is often abbreviated to K8s.
For a quick K8s experience, use the K8s integrated with Docker for Mac.
After enabling K8s and waiting for its initialization to complete, docker ps shows that K8s has started a series of containers:
CONTAINER ID   IMAGE                            COMMAND                  CREATED      STATUS      PORTS   NAMES
17a693617137   docker/kube-compose-controller   "/compose-controller…"   3 days ago   Up 3 days           k8s_compose_compose-74649b4db6-szsqz_docker_4f5997b7-5c47-11e9-95b9-025000000001_0
a9b666b48815   docker/kube-compose-api-server   "/api-server --kubec…"   3 days ago   Up 3 days           k8s_compose_compose-api-5d754cdd89-ncwrq_docker_131b4d65-04e7-11e9-837c-025000000001_0
f4b05eefc73a   6f7f2dc7fab5                     "/sidecar --v=2 --lo…"   3 days ago   Up 3 days           k8s_sidecar_kube-dns-86f4d74b45-zh6qc_kube-system_f669bc59-04e6-11e9-837c-025000000001_0
867f8f040258   c2ce1ffb51ed                     "/dnsmasq-nanny -v=2…"   3 days ago   Up 3 days           k8s_dnsmasq_kube-dns-86f4d74b45-zh6qc_kube-system_f669bc59-04e6-11e9-837c-025000000001_0
17f26a6e91d2   80cc5ea4b547                     "/kube-dns --domain=…"   3 days ago   Up 3 days           k8s_kubedns_kube-dns-86f4d74b45-zh6qc_kube-system_f669bc59-04e6-11e9-837c-025000000001_0
...
View the version with kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
View the nodes with kubectl get nodes:
NAME STATUS ROLES AGE VERSION
docker-for-desktop Ready master 123d v1.10.11
View the services with kubectl get service:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   123d
Deploy and test the application
Write application deployment files
1. FlaskApp file k8s/flaskapp.yaml:
apiVersion: v1
kind: Service
metadata:
  name: flaskapp
spec:
  ports:
  - port: 5000
  selector:
    name: flaskapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flaskapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: flaskapp
    spec:
      containers:
      - image: flaskapp:0.0.2
        name: flaskapp
        ports:
        - containerPort: 5000
Understanding this deployment file requires a general idea of how K8s works: K8s exposes a RESTful interface through the API server for cluster interaction. Every deployment object has the apiVersion, kind, metadata, and spec keywords.
- The file defines two types of objects: Service and Deployment. A Service describes a service exposed by K8s; a Deployment describes how that service is deployed.
- The Service object's ports describes the service ports, which are ports on the cluster's internal network.
- The Service object's selector describes how the Service selects the Pods it routes to, using the label name: flaskapp; this decouples the Service from the Deployment.
- The Deployment's replicas describes the number of container replicas, which can be changed later to scale out.
- The Deployment's containers describes the image name, service port, and so on.
2. Redis service file k8s/redis.yaml:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
  selector:
    name: redis
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
      - image: redis:4-alpine3.8
        name: redis
        ports:
        - containerPort: 6379
The Redis deployment file is similar to the flaskapp deployment file.
3. Nginx service file k8s/nginx.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  default.conf: |
    upstream flaskapp {
        server flaskapp:5000;
    }
    server {
        listen 80;
        server_name localhost;
        root /usr/share/nginx/html;
        location / {
            proxy_pass http://flaskapp;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    name: nginx
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - image: nginx:1.15.8-alpine
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config-volume
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
      volumes:
      - name: nginx-config-volume
        configMap:
          name: nginx-config
The nginx deployment file differs in two ways:
- A ConfigMap object defines the default.conf file, whose content is consistent with nginx/default.conf from the previous article.
- The ConfigMap object is mounted into the nginx container as the nginx configuration file.
Deploy the application to the cluster
Use the kubectl apply -f k8s command to submit the written YAML files to the K8s cluster, which automatically deploys them according to their declarations:
service "flaskapp" created
deployment.extensions "flaskapp" created
configmap "nginx-config" created
service "nginx" created
deployment.extensions "nginx" created
service "redis" created
deployment.extensions "redis" created
kubectl apply -f k8s submits all files in the k8s directory to the cluster. Of course, you can also submit file by file, e.g. kubectl apply -f k8s/redis.yaml.
Access the application
View the services with kubectl get service:
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
flaskapp     ClusterIP   10.110.202.47    <none>        5000/TCP       31s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        123d
nginx        NodePort    10.100.233.149   <none>        80:30457/TCP   31s
redis        ClusterIP   10.106.55.214    <none>        6379/TCP       31s
Note that the PORT(S) entry of the nginx service reads 80:30457/TCP. This means that port 80 of the service is exposed on port 30457 of the node, similar to Docker's -p option.
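By default K8s picks a random port from the NodePort range (30000–32767). If a fixed port is preferred, it can be pinned in the Service; a minimal sketch (the explicit nodePort value is an assumption and must fall within that range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80          # service port on the cluster-internal network
    nodePort: 30457   # assumed fixed node port; must be in 30000-32767
  selector:
    name: nginx
```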
View the Pods with kubectl get pods:
NAME READY STATUS RESTARTS AGE
flaskapp-6c4fccdf99-v6w2v 1/1 Running 0 2m
nginx-85fb469b96-lr982 1/1 Running 0 2m
redis-5b44bb8d97-wwmll 1/1 Running 0 2m
Of course, you can also view the Docker containers directly with docker ps:
➜ docker2istio docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED             STATUS             NAMES
ad7377ae7196   ae70b17240ec                 "docker-entrypoint.s…"   About an hour ago   Up About an hour   k8s_redis_redis-5b44bb8d97-wwmll_default_2907f4a3-6639-11e9-b8cb-025000000001_0
c01108b49076   1a61773c4c07                 "python flaskapp.py"     About an hour ago   Up About an hour   k8s_flaskapp_flaskapp-6c4fccdf99-xcmwb_default_28fbe1b1-6639-11e9-b8cb-025000000001_0
11d1fa3f182b   315798907716                 "nginx -g 'daemon of…"   About an hour ago   Up About an hour   k8s_nginx_nginx-85fb469b96-lr982_default_28fbdeee-6639-11e9-b8cb-025000000001_0
c28032a4b068   k8s.gcr.io/pause-amd64:3.1   "/pause"                 About an hour ago   Up About an hour   k8s_POD_redis-5b44bb8d97-wwmll_default_2907f4a3-6639-11e9-b8cb-025000000001_0
7091657acfbc   k8s.gcr.io/pause-amd64:3.1   "/pause"                 About an hour ago   Up About an hour   k8s_POD_flaskapp-6c4fccdf99-xcmwb_default_28fbe1b1-6639-11e9-b8cb-025000000001_0
97007670c247   k8s.gcr.io/pause-amd64:3.1   "/pause"                 About an hour ago   Up About an hour   k8s_POD_nginx-85fb469b96-lr982_default_28fbdeee-6639-11e9-b8cb-025000000001_0
...
!!! Note: a Pod is not equivalent to a Docker container; the Pod is the smallest scheduling unit in K8s. Simply put, a Pod can contain multiple containers, as suggested by the containers: keyword in the YAML files. A close look at the docker ps output shows that each Pod, in addition to its user-defined container, has a system container with the image k8s.gcr.io/pause-amd64:3.1.
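As a sketch of the multi-container idea (a hypothetical manifest, not part of this demo), a single Pod can list several containers that share the Pod's network and can reach each other via localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: flaskapp:0.0.2   # the demo image
    ports:
    - containerPort: 5000
  - name: sidecar           # hypothetical helper sharing the Pod's network
    image: busybox:1.30
    command: ["sh", "-c", "while true; do sleep 3600; done"]
```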
Finally, use curl http://127.0.0.1:30457 to access the service:
Hello World by 10.1.0.21 from 192.168.65.3! This page has been accessed once.
Capacity expansion
In a K8s cluster, capacity expansion is very simple:
➜ docker2istio kubectl edit deployment/flaskapp
deployment.extensions "flaskapp" edited
Change replicas to **3**.
You can also modify the value in k8s/flaskapp.yaml and then run kubectl apply -f k8s/flaskapp.yaml.
Similarly, if the image is updated, modify flaskapp.yaml and apply it again.
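For example (a sketch; the 0.0.3 tag is hypothetical), bumping the image tag in the Deployment's container spec and re-applying the file triggers a rolling update:

```yaml
    spec:
      containers:
      - image: flaskapp:0.0.3   # hypothetical new tag; re-apply to roll out
        name: flaskapp
        ports:
        - containerPort: 5000
```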
Check the expansion result with kubectl get pods -o wide (-o wide displays more information):
NAME READY STATUS RESTARTS AGE IP NODE
flaskapp-6c4fccdf99-9xsjl 1/1 Running 0 3m 10.1.0.23 docker-for-desktop
flaskapp-6c4fccdf99-xcmwb 1/1 Running 0 1h 10.1.0.21 docker-for-desktop
flaskapp-6c4fccdf99-zp8mk 1/1 Running 0 3m 10.1.0.24 docker-for-desktop
nginx-85fb469b96-lr982 1/1 Running 0 1h 10.1.0.19 docker-for-desktop
redis-5b44bb8d97-wwmll 1/1 Running 0 1h 10.1.0.22 docker-for-desktop
Access the service multiple times:
➜ docker2istio curl http://127.0.0.1:30457
Hello World by 10.1.0.21 from 192.168.65.3! This page has been accessed twice.
➜ docker2istio curl http://127.0.0.1:30457
Hello World by 10.1.0.23 from 192.168.65.3! This page has been accessed 3 times.
➜ docker2istio curl http://127.0.0.1:30457
Hello World by 10.1.0.24 from 192.168.65.3! This page has been accessed 4 times.
Combined with the flaskapp Pod IPs seen earlier, it is clear that requests are automatically load-balanced across different Pods.
Clean up
Remove everything with kubectl delete -f k8s:
service "flaskapp" deleted
deployment.extensions "flaskapp" deleted
configmap "nginx-config" deleted
service "nginx" deleted
deployment.extensions "nginx" deleted
service "redis" deleted
deployment.extensions "redis" deleted
Container orchestration
In fact, with multiple nodes, a K8s cluster automatically schedules Pods to the appropriate nodes; this is the idea of container orchestration. This ability mainly comes in two forms.
Node labels
The nodes of our K8s demo cluster look like this:
[tyhall51@192-168-10-21 k8s]$ kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
192-168-10-14   Ready    <none>   13d    v1.14.0
192-168-10-18   Ready    <none>   130d   v1.14.0
192-168-10-21   Ready    master   131d   v1.14.0
Deploy the sample application to the K8S demo cluster:
[tyhall51@192-168-10-21 docker2istio]$ kubectl apply -f k8s -n docker2istio
service/flaskapp created
deployment.extensions/flaskapp created
configmap/nginx-config created
service/nginx created
deployment.extensions/nginx created
service/redis created
deployment.extensions/redis created
!!! Note: to avoid name conflicts with other services, the -n docker2istio parameter deploys into a separate namespace. The namespace can be created with the kubectl create namespace docker2istio command.
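Equivalently, the namespace can be declared in YAML and applied together with the other manifests (a minimal sketch):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: docker2istio
```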
View the services in the namespace:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get service -n docker2istio
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
flaskapp   ClusterIP   10.101.127.107   <none>        5000/TCP       47s
nginx      NodePort    10.103.147.187   <none>        80:30387/TCP   46s
redis      ClusterIP   10.106.162.13    <none>        6379/TCP       46s
View the Pods in the namespace:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get pods -o wide -n docker2istio
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
flaskapp-589c4cdf86-sftr9   1/1     Running   0          81s   10.244.2.30    192-168-10-14   <none>           <none>
nginx-55b87f44ff-b4x88      1/1     Running   0          81s   10.244.2.31    192-168-10-14   <none>           <none>
redis-7fc7fc64fb-2nzjq      1/1     Running   0          81s   10.244.1.195   192-168-10-18   <none>           <none>
Referring to the steps above, modify replicas to scale flaskapp:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get pods -o wide -n docker2istio
NAME                        READY   STATUS    RESTARTS   AGE     IP             NODE            NOMINATED NODE   READINESS GATES
flaskapp-589c4cdf86-8jzwx   1/1     Running   0          4s      10.244.1.197   192-168-10-18   <none>           <none>
flaskapp-589c4cdf86-sftr9   1/1     Running   0          3m10s   10.244.2.30    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-tz98x   1/1     Running   0          4s      10.244.1.196   192-168-10-18   <none>           <none>
nginx-55b87f44ff-b4x88      1/1     Running   0          3m10s   10.244.2.31    192-168-10-14   <none>           <none>
redis-7fc7fc64fb-2nzjq      1/1     Running   0          3m10s   10.244.1.195   192-168-10-18   <none>           <none>
It can be seen that after scaling, flaskapp's three Pods are automatically spread across the two business nodes 192-168-10-18 and 192-168-10-14.
Node 192-168-10-14 uses a high-speed SSD with better IO performance, so we want Redis to be scheduled onto this node.
Label node 192-168-10-14 with storage=ssd:
[tyhall51@192-168-10-21 docker2istio]$ kubectl label nodes 192-168-10-14 storage=ssd
node/192-168-10-14 labeled
Check that the label was applied:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get nodes --show-labels | grep ssd
192-168-10-14   Ready   <none>   13d   v1.14.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192-168-10-14,kubernetes.io/os=linux,storage=ssd
Then modify k8s/redis.yaml, adding a nodeSelector entry with storage: ssd:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: redis
    spec:
      containers:
      - image: redis:4-alpine3.8
        name: redis
        ports:
        - containerPort: 6379
      nodeSelector:
        storage: ssd
Apply it with kubectl apply -f k8s/redis.yaml -n docker2istio, then check the Pod distribution in docker2istio:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get pods -o wide -n docker2istio
NAME                        READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
flaskapp-589c4cdf86-8jzwx   1/1     Running   0          11m   10.244.1.197   192-168-10-18   <none>           <none>
flaskapp-589c4cdf86-sftr9   1/1     Running   0          14m   10.244.2.30    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-tz98x   1/1     Running   0          11m   10.244.1.196   192-168-10-18   <none>           <none>
nginx-55b87f44ff-b4x88      1/1     Running   0          14m   10.244.2.31    192-168-10-14   <none>           <none>
redis-66f66896b6-7666t      1/1     Running   0          4s    10.244.2.35    192-168-10-14   <none>           <none>
It can be seen that the Redis Pod has been rescheduled to node 192-168-10-14, demonstrating node-label affinity.
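nodeSelector is the simplest scheduling constraint. K8s also supports richer nodeAffinity rules; a sketch expressing the same SSD requirement (not used in this demo) would sit in the Pod spec:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: storage
            operator: In
            values:
            - ssd
```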
Node taints
In the K8s demo cluster, 192-168-10-21 is the master node, which by default does not schedule business Pods; this behavior is implemented with node taints. Remove the scheduling taint from 192-168-10-21:
kubectl taint node 192-168-10-21 node-role.kubernetes.io/master:NoSchedule-
Then scale flaskapp to 6 replicas and observe the Pod distribution:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get pods -o wide -n docker2istio
NAME                        READY   STATUS    RESTARTS   AGE     IP             NODE            NOMINATED NODE   READINESS GATES
flaskapp-589c4cdf86-8jzwx   1/1     Running   0          20m     10.244.1.197   192-168-10-18   <none>           <none>
flaskapp-589c4cdf86-92rm5   1/1     Running   0          5s      10.244.2.36    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-bfhs8   1/1     Running   0          5s      10.244.0.26    192-168-10-21   <none>           <none>
flaskapp-589c4cdf86-sftr9   1/1     Running   0          23m     10.244.2.30    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-srv25   1/1     Running   0          5s      10.244.0.25    192-168-10-21   <none>           <none>
flaskapp-589c4cdf86-tz98x   1/1     Running   0          20m     10.244.1.196   192-168-10-18   <none>           <none>
nginx-55b87f44ff-b4x88      1/1     Running   0          23m     10.244.2.31    192-168-10-14   <none>           <none>
redis-66f66896b6-7666t      1/1     Running   0          9m30s   10.244.2.35    192-168-10-14   <none>           <none>
Here you can see that two Pods have been scheduled to node 192-168-10-21.
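Instead of removing the master taint cluster-wide, a single workload can opt in by declaring a toleration in its Pod spec; a sketch (not used in this demo):

```yaml
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```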
Restore the taint:
[tyhall51@192-168-10-21 docker2istio]$ kubectl taint node 192-168-10-21 node-role.kubernetes.io/master=:NoSchedule
node/192-168-10-21 tainted
Delete the two Pods on 192-168-10-21:
kubectl delete pod/flaskapp-589c4cdf86-bfhs8 -n docker2istio
kubectl delete pod/flaskapp-589c4cdf86-srv25 -n docker2istio
Observe pod distribution:
[tyhall51@192-168-10-21 docker2istio]$ kubectl get pods -o wide -n docker2istio
NAME                        READY   STATUS    RESTARTS   AGE     IP             NODE            NOMINATED NODE   READINESS GATES
flaskapp-589c4cdf86-8jzwx   1/1     Running   0          25m     10.244.1.197   192-168-10-18   <none>           <none>
flaskapp-589c4cdf86-92rm5   1/1     Running   0          4m40s   10.244.2.36    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-fp5w4   1/1     Running   0          73s     10.244.2.37    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-lv2ch   1/1     Running   0          73s     10.244.1.199   192-168-10-18   <none>           <none>
flaskapp-589c4cdf86-p9kb6   1/1     Running   0          7s      10.244.2.38    192-168-10-14   <none>           <none>
flaskapp-589c4cdf86-sftr9   1/1     Running   0          28m     10.244.2.30    192-168-10-14   <none>           <none>
nginx-55b87f44ff-b4x88      1/1     Running   0          28m     10.244.2.31    192-168-10-14   <none>           <none>
redis-66f66896b6-7666t      1/1     Running   0          14m     10.244.2.35    192-168-10-14   <none>           <none>
You can see that the deleted Pods have been recreated on the two business nodes 192-168-10-18 and 192-168-10-14.
Conclusion
Compared with Compose, K8s:
- expands the management scale from a single node to a cluster;
- makes scaling easier and seamless;
- supports more sophisticated deployment strategies for orchestrating containers.
Related components
- Etcd
Etcd is a distributed key-value store designed to reliably and quickly store critical data and provide access to it. It supports reliable distributed coordination through distributed locks, leader elections, and write barriers. An etcd cluster offers high availability and persistent data storage and retrieval. In K8s, etcd stores the cluster state.
- EFK
EFK (Elasticsearch + Fluentd + Kibana) is the log collection solution officially recommended for Kubernetes.
- helm
Helm helps you manage Kubernetes Applications — Helm Charts Help you define, install, and upgrade even the most complex Kubernetes application.
- Rook
File, block, and object storage services for your cloud-native environments.