After reading this article, I hope you will understand the basic principles of Docker, how Kubernetes works, and what Kubernetes brings to front-end development.

The biggest difference between Docker and traditional deployment is that it does not restrict which tool, language, or runtime version we use. Docker treats our application as a box (container) that only provides services over the network, while Kubernetes automates the management of these boxes: automatic creation, automatic restart, automatic scaling, automatic scheduling. This process is called container orchestration.

Today, container orchestration brings great flexibility to web applications, allowing us to easily create the programs we need and expose them as services. Compared with traditional IaaS, we no longer have to care about applying for or configuring cloud hosts, nor worry about a service becoming unavailable because a cloud host fails: the replica controller in Kubernetes migrates our containers for us after a host failure.

In this post, we'll walk through the path from Docker to Kubernetes, and finish with a look at what Kubernetes has to offer the front end.

Docker installation

  • For Linux Debian/Ubuntu, install the community edition, Docker CE

  • Windows one-click installation

On Windows 10 Home or Windows 7, VirtualBox is used to install Linux as Docker's host; on Windows 10 Pro, Hyper-V is used to install Linux as the host for Docker.

  • MacOS one-click installation

Docker basic information

By default Docker stores its data in /var/lib/docker, where all images, containers, and volumes live. If you use multiple disks, or have an SSD that is not mounted on /, you need to change the default graph directory to a more suitable location. The configuration file is /etc/docker/daemon.json, for example:

{
  "bip": "192.168.0.1/16",
  "graph": "/mnt/ssd/0/docker"
}

Docker automatically creates the docker0 network interface during installation and assigns it an IP address. The bip option above specifies the IP address of docker0; if it is not set, Docker picks a suitable address based on the host IP when creating docker0. However, networks can be complex, especially data center networks, and address conflicts are easy to run into, so in those cases you need to set bip manually to an appropriate value. Docker's IP selection rules are well analyzed in this article: blog.csdn.net/longxing_12… .

After installation and startup, you can view Docker's configuration with docker info.

Docker hello world

The first command to check that the installation works is simple:

docker run hello-world

It first downloads the hello-world image from Docker Hub and then runs it locally. A running instance of an image is called a container. When the container is created, it executes the specified entry program; the program writes some output to the stream and exits, and the container ends when its entry program ends.

  • View all containers
docker ps -a

The output is as follows:

cf9a6bc212f9        hello-world                     "/hello"                 28 hours ago        Exited (0) 3 min

The first column is the container ID, which is required for many container-specific operations, such as the following common operations.

docker rm container_id
docker stop container_id
docker start container_id
docker inspect container_id

docker rm removes a container, docker stop stops a running container, docker start starts a stopped container again, and docker inspect shows its details. To have a container deleted automatically when it exits, pass the --rm parameter to docker run.

When we run this command Docker will download the hello-world image and cache it locally so that the next time we run the command we don’t need to download it from the source.

  • View local images
docker images

Run Nginx

Nginx, as a widely used web server, is also popular in the Docker world; it is often used to start a network service and verify the network configuration. Start an Nginx container with the following command: docker run --rm -p 80:80 nginx.

Visit localhost:80 to see that the Nginx service has started; the console shows the log output of the Nginx service.

Because the network inside Docker is isolated from the outside world, we need to specify port forwarding manually: -p 80:80 explicitly forwards host port 80 (the first number) to the container's port 80. Exposing a port is the most common way to provide a service, but there are other kinds of workloads, such as log processing and data collection, that serve through shared data volumes instead. All of these need to be specified explicitly when the container is started.

Some common startup parameters (an example combining them follows the list):

  • -p hostPort:containerPort, maps a host port to a container port
  • -P, maps the container's exposed ports to random host ports
  • -v hostPath (or volume name):containerPath, mounts a host path or data volume into the container
  • -it, starts the container with an interactive terminal
  • -d, keeps the container running in the background
  • --rm, cleans up the container's resources after it exits
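For example, the following command combines several of these flags: it starts Nginx in the background, maps host port 8080 to container port 80, mounts a local directory as the site root (the local path here is just a placeholder), and removes the container once it stops.

docker run -d --rm -p 8080:80 -v /tmp/site:/usr/share/nginx/html --name web nginx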

How does Docker work

At its core, Docker relies on the namespace and cgroup features of the Linux kernel: namespaces provide resource isolation, and cgroups provide resource quotas. The Linux kernel has six kinds of namespaces, which correspond to the following:

Namespace   System call flag   Isolated content
UTS         CLONE_NEWUTS       Hostname and domain name
IPC         CLONE_NEWIPC       Semaphores, message queues, and shared memory
PID         CLONE_NEWPID       Process IDs
Network     CLONE_NEWNET       Network devices, network stack, ports, etc.
Mount       CLONE_NEWNS        Mount points (file system)
User        CLONE_NEWUSER      Users and user groups

Three system calls are related to namespaces:

  1. clone man7.org/linux/man-p…

If we want the child process to have its own network address and TCP/IP stack, we can specify it as follows:

clone(cb, stack, CLONE_NEWNET, 0)
  2. unshare man7.org/linux/man-p…

Moves the current process into a new namespace. For example, processes created with fork or vfork share the parent's resources by default; calling unshare detaches the calling process from those shared resources.

  3. setns man7.org/linux/man-p…

Associates the calling process with an existing namespace identified by a file descriptor; it is usually used to share another process's namespace.

Linux supports namespace isolation at the kernel level through these system calls. By assigning separate namespaces to a process, it can be isolated along each of these resource dimensions: each process can have its own hostname, IPC, PIDs, IP address, root file system, users and groups, and so on. It looks like a private system, yet the kernel is still shared even though the resources are isolated, which is one reason containers are lighter than traditional virtual machines.
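On a Linux host you can observe this isolation directly with the unshare tool from util-linux (root is required):

# run a shell in a fresh network namespace and list its interfaces;
# only the loopback device shows up, the host's NICs are invisible here
sudo unshare --net sh -c 'ip addr'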

Resource isolation alone is not enough, though. To guarantee real fault isolation so that workloads do not affect each other, CPU, memory, GPU and other resources also need to be limited: if one program gets into an infinite loop or leaks memory, other programs would otherwise fail to run. Resource quotas are implemented with the kernel's cgroup feature; readers who want the details can refer to: www.cnblogs.com/sammyliu/p/… . (It is also highly recommended to run containers on a Linux 4.9+ kernel; Linux 3.x has known instabilities that can cause the host to restart.)
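Docker exposes these cgroup quotas as ordinary run flags, for example:

# cap the container at half a CPU core and 256 MB of memory
docker run -d --cpus 0.5 --memory 256m nginx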

Docker network

For a container to provide services, it needs to expose its network. Docker is isolated from the host's environment, so to expose a service we have to tell Docker which ports may be accessed from outside. When we run docker run -p 80:80 nginx, we expose port 80 inside the container on port 80 of the host; how the port forwarding works is analyzed in detail below. Networking is the most important part of a container and the cornerstone of building large clusters, so we need a basic understanding of it when deploying applications with Docker.

Docker provides four network modes, Host, Container, None, and Bridge, specified with --net.

Host mode:

docker run --net host nginx

In Host mode, the container uses the host's NIC directly: the IP address seen inside the container is the host's IP, and port bindings land directly on the host NIC.

Container mode:

docker run --net container:xxx_containerid nginx

The container shares the network namespace, network configuration, IP address, and ports of the specified container. A container whose network mode is Host cannot be shared in this way.

None mode:

docker run --net none busybox ifconfig

A container started in None mode is given no network devices at all, only the internal loopback (lo) interface.

Bridge mode:

docker run --net bridge busybox ifconfig

This is the default mode. When a container starts it gets its own network namespace. During installation/initialization Docker also creates a bridge named docker0 on the host, which serves as the containers' default gateway, and each container is assigned an IP address within the gateway's subnet.
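You can check which subnet and gateway Docker chose for the bridge:

docker network inspect bridge
# the output includes the Subnet (e.g. 172.17.0.0/16) and the Gateway used by docker0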

When we execute docker run -p 3000:80 nginx, Docker creates an iptables forwarding rule on the host.
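The rule lives in the nat table of iptables and can be inspected on the host; the exact addresses below are only illustrative.

# list the NAT rules Docker manages
sudo iptables -t nat -L DOCKER -n
# a DNAT entry similar to this one appears:
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:3000 to:172.18.0.2:80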

This DNAT rule says that when a request reaches port 3000 on the host NIC, destination address translation (DNAT) is performed: the destination address is rewritten to the container's address (172.18.0.2 here) and the port to 80. The traffic is then forwarded from the host's default NIC through docker0 to the corresponding container. So when the outside world requests port 3000 on the host, the traffic ends up at the container's internal service, and the service is exposed.

Similarly, when a container accesses external services, source address translation (SNAT) is performed: if the container requests google.com, the remote server sees the IP address of the host's NIC.

Bridge mode is less efficient than Host mode because of the extra NAT layer, but it isolates the container from the external network environment and gives each container its own IP address and a full port space.

These are the four working modes Docker provides. Kubernetes, however, requires all containers to be able to reach each other as if they were on one LAN, so deploying a cluster needs a multi-host network plug-in.

Flannel

Multi-host networking standards include the CNI specification introduced by the CNCF and Docker's CNM. CNI is the most widely used today, and Flannel is one of its implementations.

Flannel solves cross-host communication with packet encapsulation: the original packet is wrapped inside an outer packet addressed to the destination host; when it arrives, the outer layer is stripped and the inner packet is delivered to the target container. Between hosts, flannel transmits these encapsulated packets over the more efficient UDP protocol.

There are currently three mainstream approaches to cross-host communication, each with its own advantages and drawbacks depending on the scenario:

  • overlay, that is, the packet encapsulation described above.
  • host-gw, which modifies the host routing table so packets are forwarded without encapsulation or unpacking; it is more efficient, but also more restricted, and only works when the hosts are in the same LAN (see the routing-table check after this list).
  • BGP implemented in software, which broadcasts routing rules to the routers in the network; like host-gw it needs no encapsulation, but the implementation cost is higher.
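On a flannel node you can tell which approach is in use by looking at the routing table (10.244.0.0/16 is flannel's default pod network; your addresses will differ):

ip route | grep 10.244
# host-gw: routes to other nodes' pod subnets point directly at the peer host's IP
# vxlan/udp overlay: the same routes point at the flannel.1 or flannel0 device instead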

With CNI, you can build a Kubernetes cluster on top of this.

Introduction to Kubernetes

At a small scale, deploying applications with Docker is very convenient: one command gives you one-click deployment. But when many replicas have to be deployed across hundreds of hosts, we need to track the running state of all those hosts and restart failed services on other hosts, and doing that by hand is clearly not workable. This is where a higher-level orchestration tool such as Kubernetes comes in. Kubernetes is abbreviated K8S. Simply put, K8S abstracts the hardware, turning N physical machines or cloud hosts into one resource pool. Container scheduling is handed over to K8S, which looks after our containers like a mother: if a container needs more memory than its current machine can give, K8S finds a machine with enough memory and creates the container there; if a service dies for some reason, K8S migrates and restarts it automatically. As developers we only care about our own code; the health of the application is guaranteed by K8S.

The installation itself is not covered here. On Windows or macOS you can simply enable the Kubernetes option in Docker Desktop to get a single-node cluster with one click, or use the kind tool to simulate a multi-node K8S cluster locally.
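For example, with kind installed, a throwaway local cluster is two commands away (kind runs each cluster node as a Docker container):

kind create cluster --name dev
kubectl cluster-info --context kind-dev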

The basic scheduling unit of K8S is the Pod. A Pod is one or more containers. To quote a book:

The reason a single container is not the scheduling unit is that one container alone does not constitute a service. For example, a web application with a front/back-end split needs a Node.js container and a Tomcat container to form a complete service, so two containers have to be deployed. You could put both into one container, but that clearly violates the idea of one process per container. (Service Mesh: Implementing Service Mesh with Istio Soft Load)

K8S differs from traditional IaaS systems:

IaaS means Infrastructure as a Service. To bring a new application online, developers have to apply for hosts, IPs, domain names and a series of other resources, then log in to the hosts, set up the required environment and deploy the application. This does not scale well and increases the chance of mistakes. Ops or developers often write their own automation scripts, and whenever some environment differs they patch the scripts by hand, which is painful.

With K8S the infrastructure becomes programmable: instead of applying for resources by hand, everything is created automatically from a manifest file. Developers only need to submit a document and K8S allocates and creates the resources for them; creating, reading, updating and deleting this infrastructure can all be automated programmatically.

To understand the basic concepts of K8S, let’s deploy a Node SSR application:

Initialize an application template

npm install create-next-app
npx create-next-app next-app
cd next-app

After creating the project, add a Dockerfile for building the service image.

Dockerfile

FROM node:8.16.1-slim as build

COPY ./ /app

WORKDIR /app
RUN npm install
RUN npm run build
RUN rm -rf .git


FROM node:8.16.1-slim

COPY --from=build /app /app

EXPOSE 3000
WORKDIR /app

CMD ["npm"."start"]
Copy the code

This Dockerfile contains two optimizations:

  1. It uses the slim variant of the Node base image, which greatly reduces the image size.
  2. It uses a multi-stage build: the final image has fewer layers and temporary files (such as .git) are left behind, which reduces the image size.

Build the image

docker build  . --tag next-app
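Before handing the image to the cluster, we can give it a quick local check (the Next.js server listens on port 3000 by default):

docker run --rm -p 3000:3000 next-app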

Now we can describe the application to Kubernetes. To ensure high availability, the service needs at least two replicas, and we also need an application domain name: requests for that domain arriving at the cluster should be forwarded automatically to our service. The corresponding configuration file can be written like this:

Deployment.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: next-app-server
    http:
      paths:
      - backend:
          serviceName: app-service
          servicePort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: app-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 3000

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: next-app
        name: next-app
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000

The above manifest tells K8S:

  • First, we need a Deployment controller; the image is next-app, the service port is 3000, and two replicas should be created.
  • We also need a Service that points to the next-app pods created by the replica controller.
  • We request an Ingress entry with the domain name next-app-server, pointing at the Service above.

Submit this application to K8S.

kubectl apply -f ./Deployment.yaml

You can then see the pod deployed.

sh-4.4$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
app-deployment-594c48dbdb-4f4cg   1/1     Running   0          1m
app-deployment-594c48dbdb-snj54   1/1     Running   0          1m

Then open the domain name configured in the Ingress in your browser to access the corresponding application (provided that the domain name can reach your K8S cluster node).

The manifest above creates the three resources most commonly needed to keep a service running, and these are also the three most important Kubernetes resource types.

  • Ingress

    Layer-7 load balancing configuration. It can route to different Services based on domain name or path. Ingress is similar to Nginx; in fact one Ingress implementation is Nginx, so you can largely think of Ingress as Nginx, except that we no longer have to modify nginx.conf by hand, nor restart the Nginx service manually.

  • Service

    An abstraction over a set of Pods, used to select the pods that provide the same service. Pods are ephemeral: they are destroyed and recreated frequently and their IPs keep changing, so we need an abstract resource, the Service, to represent where the pods are. A Service is also K8S's internal service-discovery mechanism: the Service name is automatically written into the internal DNS records (see the lookup example after this list).

  • Deployment

    The replica controller, a mechanism for managing and maintaining pods. A Deployment lets you specify the number of replicas and the rollout strategy, records release history, and supports rollback.
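For example, because the Service name is written into the cluster DNS, any pod can find our application by name. A quick check using a throwaway busybox pod:

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup app-service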

Application publishing system

K8S is only responsible for container orchestration; to actually publish applications we still need an external Pipeline: code building, static checks and image packaging are all done by the Pipeline.

At present the most common publishing setups (at least in China) are composed of the following services: GitLab/GitHub, Jenkins, Sonar, and Harbor.
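Boiled down to its shell steps, such a pipeline roughly does the following; the registry address, project path and npm scripts are placeholders, not part of the demo project.

# 1. build and statically check the code
npm ci && npm run lint && npm run build
# 2. package the image and push it to the private registry (Harbor)
docker build . --tag harbor.example.com/fe/next-app:1.0.0
docker push harbor.example.com/fe/next-app:1.0.0
# 3. point the Deployment at the new image and let K8S roll it out
kubectl set image deployment/app-deployment next-app=harbor.example.com/fe/next-app:1.0.0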

Advantages of K8S for the front end

  1. First of all, unlike Java applications, a small NodeJS service consumes only about 40MB of memory, which means that if we run many NodeJS applications, K8S will save us a lot of hardware resources.

  2. Container-based, non-intrusive collection of logs and performance metrics.

Since a container is a process, monitoring the container amounts to monitoring our NodeJS process. The K8S ecosystem has many mature container-monitoring solutions, such as Prometheus + Grafana, which give us non-intrusive collection of application performance metrics including network IO, disk IO, CPU and memory.

The same goes for log collection: the code simply writes to the console, and a container-level log collection service gathers the output. It is likewise non-intrusive, the code layer is unaware of it, it is friendlier to developers, and it decouples logging from the service.

  3. The infrastructure layer for a front-end microservice architecture.

Microservice architecture has become an increasingly popular way of organizing front-end systems in recent years, and it needs a more flexible, elastic way of deploying. Docker lets us abstract the smallest unit of service in a complex architecture, and K8S makes automatically maintaining large clusters possible. It is fair to say that microservice architecture is a natural fit for K8S.

New possibilities with K8S: traffic distribution

K8S uses Services to abstract over a set of pods, and a Service's selector can be changed dynamically, which opens up many interesting patterns, such as a blue-green deployment system.

Blue-green deployment means that during a release, once the new version has passed its release tests, we upgrade the application in one step by switching traffic at the gateway. In K8S this one-click switch between versions is realized by dynamically updating the Service's selector.
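Assuming the pods carry a version label, as in the demo repository below, the switch amounts to patching the Service selector; this is a sketch of the idea, not the literal content of the repository's manifests.

# send traffic to the test pods
kubectl patch service app-service -p '{"spec":{"selector":{"app":"web","version":"test"}}}'
# and back to stable just as quickly
kubectl patch service app-service -p '{"spec":{"selector":{"app":"web","version":"stable"}}}'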

Let's use the Next.js application above to demonstrate blue-green deployment. Repository address:

git clone https://github.com/Qquanwei/test-ab-deploy
cd test-ab-deploy
docker build . --tag next-app:stable
kubectl apply -f ./Deployment.yaml

This deploys the next-app:stable image to the cluster and labels the pods with version: stable.

After deployment, the stable version of the page is served.

Next we deploy the test branch: we build it as the next-app:test image, and the pods it deploys are labeled version: test.

git checkout test
docker build . --tag next-app:test
kubectl apply -f ./Deployment.yaml

At this point we have two versions of the application deployed, and both are up and running.

However, since our Service still selects version=stable, no requests are routed to the test version; they all still go to the stable pods.

Once we have verified by other means that the test version works, for example through a separate Service used only for testing, we can switch the live Service over to the test application with the following command.

kubectl apply -f ./switch-to-test.yaml

After executing this command, refresh the page and the test version is served.

Switching the Service like this makes blue-green deployment easy, and it takes effect almost instantly, because a Service is a fairly lightweight resource in K8S; we do not have to restart the whole online gateway after a configuration change the way our neighbor Nginx does. Of course, a real production environment is stricter than this demo: there is usually a dedicated platform and reviewers who double-check every operation.

Blue-green and grayscale (canary) releases are relatively easy to implement with K8S, giving us more ways to test ideas. However, for more advanced traffic-allocation schemes (such as A/B releases) that need complex traffic-management policies (authentication, authorization), we need a service mesh.

Istio is a popular service mesh framework. Where K8S focuses on managing running containers, Istio focuses on the traffic flowing between the containers in the mesh.

In the official BookInfo microservice example, Istio captures the call topology between the services along with a number of traffic metrics.

There are two obvious benefits to using Istio:

  1. Istio can capture the call links between services without intruding on user code.
  2. Istio can manage each connection separately.

For example, we can easily assign dynamic traffic weights to the v1, v2, and v3 versions of the reviews service.

Beyond weight allocation, we can also set up A/B schemes, for example routing to different application versions based on URL matching, or distinguishing users by a cookie planted in the header and sending them to different applications. Depending on the business scenario, Istio enables plenty of other interesting patterns.
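As a rough sketch of what such a rule can look like: the reviews host and its v1/v2/v3 subsets come from the BookInfo example (and assume matching DestinationRule subsets exist); the cookie value and the weights are arbitrary.

kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:                     # users carrying this cookie go to v3
    - headers:
        cookie:
          regex: ".*group=beta.*"
    route:
    - destination:
        host: reviews
        subset: v3
  - route:                     # everyone else is split 80/20 between v1 and v2
    - destination:
        host: reviews
        subset: v1
      weight: 80
    - destination:
        host: reviews
        subset: v2
      weight: 20
EOF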

However, Istio is itself a complex system: it adds overhead to system performance and consumes a significant amount of resources.

Conclusion

K8S is epoch-making. As microservices keep developing, cloud-native applications will become the dominant form of our applications. For the front end, K8S will undoubtedly change the existing development model and front-end architecture, letting the front end expand more rapidly and deliver more reliably, with applications more tightly connected to one another. I believe the next three years of the long-dormant front end will belong to microservice architecture, and K8S, as the infrastructure layer of that architecture, will get attention from more and more teams.

References

  • Docker Containers and The Container Cloud
  • Kubernetes in Action
  • Service Mesh: Implementing Service Mesh with Istio Soft Load
  • Ali Cloud install Docker: blog.csdn.net/longxing_12…

This article is published by the NetEase Cloud Music front-end team. Unauthorized reproduction is prohibited. We are always hiring, so if you are ready for a change and you love cloud music, join us!