LeanCloud started using Docker in production on a large scale a few months after Docker’s official release, and Docker’s technology stack has supported our major back-end architecture for the past few years. This is a Docker and Kubernetes tutorial for programmers. The goal is to give tech-savvy readers a basic understanding of Docker and Kubernetes in as short a time as possible, and to experience the principles and benefits of a container-based production environment by actually deploying, upgrading, and rolling back a service. This article assumes that readers are developers familiar with the Mac/Linux environment, so I won’t cover basic technical concepts. The command-line examples use macOS; on Linux, just adjust them for your distribution and package manager.
Docker crash course
First, a quick introduction to Docker. As an example, we’ll start the Docker daemon locally and run a simple HTTP service in a container. Start by installing Docker:
$ brew cask install docker
The above command installs Docker for Mac from Homebrew, which contains Docker’s background processes and command-line tools. The Docker background process is installed in /Applications as a Mac App and needs to be started manually. After starting the Docker application, you can check the version of the command line tool in Terminal:
$ docker --version
Docker version 18.03.1-ce, build 9ee9f40
The version of Docker you see may not be the same as mine, but as long as it’s not too old, that’s fine. Let’s create a separate directory to hold the files we need for this example. To keep things as simple as possible, the service we’re deploying uses Nginx to serve a single HTML file, html/index.html.
$ mkdir docker-demo
$ cd docker-demo
$ mkdir html
$ echo 'Hello Docker!' > html/index.html
Next create a new file called Dockerfile in the current directory, containing the following contents:
FROM nginx
COPY html/* /usr/share/nginx/html
Every Dockerfile starts with FROM …. FROM nginx means we build our image on top of the official image provided by Nginx. At build time, Docker finds and downloads the required image from Docker Hub. Docker Hub is to Docker images what GitHub is to code: a service that hosts and shares images. Downloaded and built images are cached locally. The second line copies our static file into the /usr/share/nginx/html directory of the image, which is where Nginx looks for static files. A Dockerfile contains the instructions for building an image; for more details, see the official Dockerfile reference.
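As an aside that isn’t needed for this tutorial, here is a minimal sketch of a Dockerfile using a few more of the common instructions. The base image, port, and file names are hypothetical:
# A hypothetical Dockerfile for a Node.js service (illustration only)
FROM node:18-alpine
# Set the working directory inside the image
WORKDIR /app
# Copy the dependency manifest first so this layer can be cached
COPY package.json .
# Run a command at build time to install dependencies
RUN npm install
# Copy the rest of the source code
COPY . .
# Document the port the service listens on
EXPOSE 3000
# Default command when a container starts
CMD ["node", "server.js"]
Back to our two-line Dockerfile.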
Then you can build the image:
$ docker build -t docker-demo:0.1 .
Make sure you follow the steps above and create a new directory for this experiment, then run docker build inside it. If you run it in a directory with lots of other files (such as your home directory or /tmp), Docker will send all the files in the current directory as the build context to the background process responsible for building.
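If you ever do need to build in a directory that contains files irrelevant to the image, a .dockerignore file keeps them out of the build context. A hypothetical example (the entries are just illustrations):
.git
node_modules
*.log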
The name docker-demo can be interpreted as the application name or service name of the image, with 0.1 being the tag. Docker identifies images by a combination of name and tag. To see the image you just created, use the following command:
$ docker image ls
REPOSITORY    TAG   IMAGE ID       CREATED         SIZE
docker-demo   0.1   efb8ca048d5a   5 minutes ago   109MB
So let’s run this image. Nginx listens on port 80 by default, so we map host port 8080 to container port 80:
$ docker run --name docker-demo -d -p 8080:80 docker-demo:0.1
To see the running container, use the following command:
$ docker container ps
CONTAINER ID   IMAGE             ...   PORTS                  NAMES
c495a7ccf1c7   docker-demo:0.1   ...   0.0.0.0:8080->80/tcp   docker-demo
At this point, if you go to http://localhost:8080 with your browser, you’ll see the “Hello Docker!” page.
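You can also check it from the command line; the output should match whatever you wrote to html/index.html:
$ curl http://localhost:8080
Hello Docker!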
In a real production environment, Docker itself is a relatively low-level container engine; in a cluster with many servers, it’s impractical to manage tasks and resources this way by hand. So we need a system like Kubernetes to orchestrate and schedule tasks. Before moving on to the next step, don’t forget to clean up the experimental container:
$ docker container stop docker-demo
$ docker container rm docker-demo
Install Kubernetes
After introducing Docker, it’s finally time to try Kubernetes. We need to install three things: kubectl, the command-line client for Kubernetes; Minikube, a Kubernetes environment that runs locally; and Xhyve, a virtualization engine used by Minikube.
$ brew install kubectl
$ brew cask install minikube
$ brew install docker-machine-driver-xhyve
Minikube’s default virtualization engine is VirtualBox, and Xhyve is a lighter, better performing alternative. It needs to run as root, so change the owner to root:wheel and turn on the setuid permission:
$ sudo chown root:wheel /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
$ sudo chmod u+s /usr/local/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
Then you can start Minikube:
$ minikube start --vm-driver xhyve
You’ll probably see a warning that Xhyve will be replaced by HyperKit in a future release and that HyperKit is recommended. But when I wrote this tutorial, docker-machine-driver-hyperkit was not yet on Homebrew and required manual compilation and installation, so to keep things simple I stuck with Xhyve. In the future, you should be able to just install HyperKit and replace xhyve with hyperkit in the commands.
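A hedged sketch of what that would look like, assuming the driver has since landed in Homebrew (verify the package name and paths yourself before relying on this):
$ brew install docker-machine-driver-hyperkit
$ sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
$ minikube start --vm-driver hyperkit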
If you encounter an error or interruption the first time you start Minikube, and later retries still fail, you can try running minikube delete to delete the cluster and start over.
When Minikube starts, it automatically configures kubectl to point to the Kubernetes API server provided by Minikube. You can confirm this with the following command:
$ kubectl config current-context
minikube
Introduction to the Kubernetes architecture
A typical Kubernetes cluster consists of a master and many nodes. The master is the control center of the cluster, and the nodes provide CPU, memory, and storage resources. Several processes run on the master, including the API server that users interact with, the Controller Manager that maintains cluster state, the Scheduler that assigns tasks to nodes, and so on. Each node runs kubelet, which maintains node state and communicates with the master, and kube-proxy, which implements the cluster’s network services.
As a development and test environment, Minikube will set up a cluster of one node, as seen with the following command:
$ kubectl get nodes
NAME       STATUS   AGE   VERSION
minikube   Ready    1h    v1.10.0
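To see some of the system components mentioned above running in the cluster, you can list the pods in the kube-system namespace. Exactly what shows up depends on your Minikube version; typically you’ll see components such as kube-dns and kube-proxy:
$ kubectl get pods -n kube-system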
Deploy a single-instance service
Let’s start by deploying a simple service, like the one we used at the beginning of this article to introduce Docker. The smallest deployable unit in Kubernetes is a pod, not a Docker container. In fact, Kubernetes does not depend on Docker: other container engines can be used in a Kubernetes-managed cluster in place of Docker. When used with Docker, a pod can contain one or more Docker containers. However, except for tightly coupled cases, a pod usually contains only one container, which makes it easy for different services to scale independently.
Minikube comes with the Docker engine, so we need to reconfigure the client so that the Docker command line can communicate with the Docker process in Minikube:
$ eval $(minikube docker-env)
After running the command above, docker image ls will only show some Minikube images, and you won’t see the docker-demo:0.1 image we built earlier. So before going further, let’s rebuild the image, this time calling it k8s-demo:0.1.
$ docker build -t k8s-demo:0.1 .
Then create a definition file called pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: k8s-demo
spec:
  containers:
    - name: k8s-demo
      image: k8s-demo:0.1
      ports:
        - containerPort: 80
Here we define a pod named k8s-demo that uses the k8s-demo:0.1 image we just built. The file also tells Kubernetes that the process in the container listens on port 80. Then run it:
$ kubectl create -f pod.yml
pod "k8s-demo" created
kubectl submits the file to the Kubernetes API server, and the Kubernetes master then schedules the pod onto a node as required. To see the new pod, use the following command:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
k8s-demo 1/1 Running 0 5s
Because our image is local and the service is simple, the status is already Running by the time we run kubectl get pods. If you’re using a remote image (such as one on Docker Hub), you may see a status other than Running and will need to wait a bit.
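One detail worth knowing when using images built directly against Minikube’s Docker engine: whether Kubernetes tries to pull the image is governed by imagePullPolicy. For a tag other than latest, the default is IfNotPresent, which is why the locally built image is picked up. A sketch of setting it explicitly in pod.yml (not required here) would be:
apiVersion: v1
kind: Pod
metadata:
  name: k8s-demo
spec:
  containers:
    - name: k8s-demo
      image: k8s-demo:0.1
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80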
Although the pod is running, we can’t access the service it runs from a browser, the way we did when testing Docker. You can think of the pod as running on an internal network that we can’t reach directly from the outside. To expose it, we need to create a Service. A Service acts a bit like a reverse proxy and load balancer, distributing requests to the pods behind it.
Create a Service definition file svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: k8s-demo-svc
  labels:
    app: k8s-demo
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30050
  selector:
    app: k8s-demo
This Service exposes port 80 of the container on port 30050 of the node. Note the selector section in the last two lines of the file, which determines which pods in the cluster requests will be sent to: here, all pods that carry the label app: k8s-demo. However, the pod we deployed earlier has no labels set:
$ kubectl describe pods | grep Labels
Labels: <none>
So update pod.yml to add the label (note the labels section added under metadata):
apiVersion: v1
kind: Pod
metadata:
  name: k8s-demo
  labels:
    app: k8s-demo
spec:
  containers:
    - name: k8s-demo
      image: k8s-demo:0.1
      ports:
        - containerPort: 80
Then update the pod and confirm that the label was added successfully:
$ kubectl apply -f pod.yml
pod "k8s-demo" configured
$ kubectl describe pods | grep Labels
Labels: app=k8s-demo
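You can also filter pods by label, which is essentially what the Service’s selector will do (using kubectl’s standard -l flag):
$ kubectl get pods -l app=k8s-demo
NAME       READY     STATUS    RESTARTS   AGE
k8s-demo   1/1       Running   0          ...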
Then we can create the service:
$ kubectl create -f svc.yml
service "k8s-demo-svc" created
Use the following command to get the exposed URL, and visit it in a browser to see the page we created earlier.
$ minikube service k8s-demo-svc --url
http://192.168.64.4:30050
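You can also inspect the Service itself with kubectl (output abridged; the cluster IP will differ on your machine):
$ kubectl get svc k8s-demo-svc
NAME           TYPE       CLUSTER-IP   ...   PORT(S)        AGE
k8s-demo-svc   NodePort   ...          ...   80:30050/TCP   ...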
Scale out, rolling update, and version rollback
In this section, let’s experiment with some of the operations commonly used to run a highly available service in production. Before proceeding, remove the pod you just deployed (but keep the Service, which we’ll use below):
$ kubectl delete pod k8s-demo
pod "k8s-demo" deleted
In a production environment, where we need a service to survive the failure of a single node and to dynamically adjust the number of instances as load changes, it isn’t feasible to manage pods one by one as above. Kubernetes users usually manage services with a Deployment. A Deployment creates a specified number of pods, distributes them across the nodes, and can perform updates, rollbacks, and so on.
First we create a definition file deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-demo-deployment
spec:
  replicas: 10
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      containers:
        - name: k8s-demo-pod
          image: k8s-demo:0.1
          ports:
            - containerPort: 80
Note that apiVersion is different here because the Deployment API is not included in v1. replicas: 10 specifies that this Deployment should have 10 pods; the rest is similar to the earlier pod definition. Submit this file to create the Deployment:
$ kubectl create -f deployment.yml
deployment "k8s-demo-deployment" created
With the following command you can see the replica set created by the Deployment, with 10 pods running:
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
k8s-demo-deployment-774878f86f 10 10 10 19s
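You can also check the Deployment itself (the AGE column will of course differ on your machine):
$ kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
k8s-demo-deployment   10        10        10           10          19s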
Let’s say we’ve made some changes to the project and are releasing a new version. As an example, let’s just change the contents of the HTML file and build a new version of the image k8s-demo:0.2:
$ echo 'Hello Kubernetes!' > html/index.html
$ docker build -t k8s-demo:0.2 .
Then update deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-demo-deployment
spec:
  replicas: 10
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: k8s-demo
    spec:
      containers:
        - name: k8s-demo-pod
          image: k8s-demo:0.2
          ports:
            - containerPort: 80
There are two changes. The first is updating the image version to image: k8s-demo:0.2; the second is adding minReadySeconds: 10 and the strategy section, which defines the update policy. minReadySeconds: 10 means that after a pod is updated, it must stay in the ready state for 10 seconds before the next pod is updated. maxUnavailable: 1 means no more than one pod may be unavailable at any time, and maxSurge: 1 means there may be at most one extra pod beyond the desired count. With replicas: 10, that means at least 9 pods are serving and at most 11 exist at any moment during the rollout. Kubernetes will then replace the pods behind the Service one by one. Run the following command to start the update:
$ kubectl apply -f deployment.yml --record=true
deployment "k8s-demo-deployment" configured
The --record=true here makes Kubernetes record this command in the rollout history so it can be looked up later. You can immediately run the following command to check the status of each pod:
$ kubectl get pods
NAME READY STATUS ... AGE
k8s-demo-deployment-774878f86f-5wnf4 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-6kgjp 0/1 Terminating ... 7m
k8s-demo-deployment-774878f86f-8wpd8 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-hpmc5 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-rd5xw 1/1 Running ... 7m
k8s-demo-deployment-774878f86f-wsztw 1/1 Running ... 7m
k8s-demo-deployment-86dbd79ff6-7xcxg 1/1 Running ... 14s
k8s-demo-deployment-86dbd79ff6-bmvd7 1/1 Running ... 1s
k8s-demo-deployment-86dbd79ff6-hsjx5 1/1 Running ... 26s
k8s-demo-deployment-86dbd79ff6-mkn27 1/1 Running ... 14s
k8s-demo-deployment-86dbd79ff6-pkmlt 1/1 Running ... 1s
k8s-demo-deployment-86dbd79ff6-thh66 1/1 Running ... 26s
You can see from the AGE column that some pods are new and some are old. The following command displays the real-time status of the rollout:
$ kubectl rollout status deployment k8s-demo-deployment
Waiting for rollout to finish: 1 old replicas are pending termination...
Waiting for rollout to finish: 1 old replicas are pending termination...
deployment "k8s-demo-deployment" successfully rolled out
Because I ran it a bit late, the rollout was almost finished, so there are only three lines of output. The following command shows the rollout history; since the second release used --record=true, you can see the command that triggered it.
$ kubectl rollout history deployment k8s-demo-deployment
deployments "k8s-demo-deployment"
REVISION CHANGE-CAUSE
1 <none>
2 kubectl apply --filename=deploy.yml --record=true
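To see the details of a particular revision, such as which image it used, you can pass --revision; the output includes the pod template for that revision:
$ kubectl rollout history deployment k8s-demo-deployment --revision=2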
If you refresh your browser, you’ll see the updated “Hello Kubernetes!” page. If, after a new version is released, we find a serious bug and need to roll back to the previous version immediately, there is a simple operation for that:
$ kubectl rollout undo deployment k8s-demo-deployment --to-revision=1
deployment "k8s-demo-deployment" rolled back
Kubernetes will replace each pod in the same way as a new release, only this time it will replace the new version with an older version:
$ kubectl rollout status deployment k8s-demo-deployment
Waiting for rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for rollout to finish: 1 old replicas are pending termination...
deployment "k8s-demo-deployment" successfully rolled out
After the rollback is complete, refresh the browser to confirm that the page has changed back to “Hello Docker!”.
Conclusion
We have walked through building images and deploying containers at different levels, deployed a Deployment with 10 containers, and experimented with rolling updates and rollbacks. Kubernetes offers many more features; this article is just a quick walkthrough that skips over plenty of details. While you can’t add “proficient in Kubernetes” to your resume just yet, you should now be able to test your front-end and back-end projects in a local Kubernetes environment, turning to Google and the official documentation for specific questions. With this knowledge you should also be able to publish your own services to a Kubernetes production environment provided by others.
Most of LeanCloud’s services run on a Docker-based infrastructure, including API services, middleware, back-end tasks, and more. Most developers who use LeanCloud work on the front end, but the cloud engine (LeanEngine) is the part of our product that brings container technology closest to users. The cloud engine offers the isolation and easy scaling of containers while directly supporting each language’s native dependency management, and it takes care of image building, monitoring, and recovery, making it ideal for users who want to focus entirely on development.
LeanCloud is recruiting for the following positions:
Marketing Team Leader
Back-end Software Engineer (Clojure, Python, Java)
Android Software Engineer
Please see our job opportunities page for specific requirements and other open positions. In addition to the products you can see on our website, we are also working on exciting new products, and there is a lot of meaningful and valuable work to do.
If you reprint this article, please include the original link and recruitment information.