K8s in Practice: Concepts, Cluster Deployment, and Service Configuration
This article is a distillation of the Kubernetes in Practice series.

Kubernetes [koo-ber-NAY-tis] is Google's container scheduling engine, built on its experience with Borg. It supports a variety of underlying container virtualization technologies, offers complete support for distributed systems and microservice architectures, and scales horizontally with ease. It automates container deployment and replication, scales containers in and out at any time, organizes containers into groups, provides load balancing between containers, and offers container elasticity, among other features. As one of the most important projects of the Cloud Native Computing Foundation (CNCF), it can fairly be called a cloud operating system. Its goal is not just to be an orchestration system, but to provide a specification that lets you describe the architecture of a cluster and define the desired final state of its services.
Design concept
Like a typical PaaS platform, K8s supports service deployment, automated operations, resource scheduling, scaling, self-healing, load balancing, service discovery, and other functions; what sets it apart is how well it abstracts the infrastructure layer. Rather than dealing directly with storage and networking, which vary widely across environments, K8s defines interfaces and abstractions for them, such as the Container Runtime Interface (CRI), the Container Network Interface (CNI), and the Container Storage Interface (CSI). These interfaces make Kubernetes extremely open while letting it focus on deployment and container scheduling.
Kubernetes has a layered architecture similar to Linux, as shown below:
- Infrastructure layer: container runtime, networking, storage, and so on.
- Core layer: Kubernetes' core functionality, exposing APIs externally for building higher-level applications and providing a plugin-based execution environment internally.
- Application layer: deployment (stateless and stateful applications, jobs, etc.) and routing (service discovery, load balancing, etc.).
- Management layer: system metrics (infrastructure, container, and network metrics), automation (automatic scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.).
- Interface layer: the kubectl command-line tool, client SDKs, and cluster federation.
- Ecosystem: the broader ecosystem for container cluster management and scheduling above the interface layer, which splits into two categories: the external ecosystem (logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps) and the internal ecosystem (CRI, CNI, CSI, image registries, cloud providers, and cluster configuration and management).
All configuration in Kubernetes is set through the spec of API objects; that is, users change the system by declaring its desired state. This is one of Kubernetes' key design concepts: all operations are declarative rather than imperative. Declarative operations are stable in a distributed system, with no fear of an operation being lost or running several times. For example, setting the replica count to 3 yields the same result no matter how many times it runs, whereas "add 1 to the replica count" is imperative, and running it several times gives the wrong result.

Declarative operations are also more stable and more palatable to users than imperative ones, because each API operation has an implicit target object, and those targets happen to be nouns: Service, Deployment, PV, and so on. Declarative configuration files such as YAML and JSON are closer to human language. The declarative design also enables a closed control loop: continuously observing, correcting, and finally driving the running state to the user's desired state, and likewise perceiving user intent and executing it, for example changing the Pod count or upgrading/rolling back an application. The scheduler is at the core, but it is only responsible for choosing a suitable Node in the cluster to run the Pods; it is obviously not suited to the functions above, which require dedicated controller components.
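Returning to the replica-count example, here is a minimal sketch of the difference, assuming a Deployment named nginx whose manifest declares `spec.replicas: 3`:

```bash
# Declarative: the manifest states the desired end state; applying it
# any number of times converges on exactly 3 replicas.
$ kubectl apply -f nginx-deployment.yaml

# Imperative: an instruction that mutates the current state directly;
# nothing reconciles the cluster back to this value if it drifts, and
# the next apply of the manifest will overwrite it.
$ kubectl scale deployment nginx --replicas=4
```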
Components and Objects
Kubernetes' functionality rests on the resource objects it defines, which are submitted to the cluster's etcd through the API server. The API is defined and implemented in the HTTP REST style: users create, read, update, and delete resource objects with the standard HTTP verbs (POST, GET, PUT, DELETE). Common resource objects include Deployment, DaemonSet, Job, PV, and so on. The API abstraction exists precisely to define these resource objects, and new Kubernetes functionality is typically delivered by introducing new resource objects and implementing the functionality on top of them.
| Category | Objects |
|---|---|
| Resource objects | Pod, ReplicaSet, ReplicationController, Deployment, StatefulSet, DaemonSet, Job, CronJob, HorizontalPodAutoscaling, Node, Namespace, Service, Ingress, Label, CustomResourceDefinition |
| Storage objects | Volume, PersistentVolume, Secret, ConfigMap |
| Policy objects | SecurityContext, ResourceQuota, LimitRange |
| Identity objects | ServiceAccount, Role, ClusterRole |
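As a sketch of that REST convention, assuming `kubectl proxy` is used to handle authentication locally (the Deployment name here is illustrative):

```bash
# Expose the API server on localhost without juggling certificates
$ kubectl proxy --port=8001 &

# GET: list Deployments in the default namespace
$ curl http://localhost:8001/apis/apps/v1/namespaces/default/deployments

# DELETE: remove one Deployment by name
$ curl -X DELETE http://localhost:8001/apis/apps/v1/namespaces/default/deployments/nginx
```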
Here we select a few key objects to introduce.
Deployment
A Deployment represents a user's update operation on the Kubernetes cluster. Deployment is a broader API object than the RS application pattern: it can create a new service, update an existing one, or roll-upgrade a service. A rolling upgrade is actually a compound operation: create a new RS, then gradually raise the replica count in the new RS to the desired state while reducing the replica count in the old RS to 0. Such a composite operation is awkward to describe with an RS alone, so the more general Deployment describes it instead. As Kubernetes evolves, all long-running services are expected to be managed through Deployments.
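The mechanics can be sketched with a minimal manifest (names and image tags here are illustrative): changing `image` and re-applying makes the Deployment controller create a new ReplicaSet and shift replicas to it step by step, within the bounds of the update strategy.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 Pod above the desired count during rollout
      maxUnavailable: 1  # at most 1 Pod below the desired count during rollout
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx:1.16 # change the tag and re-apply to trigger a rollout
```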
Service
RC, RS, and Deployment only guarantee the number of microservice pods supporting services, but do not solve the problem of how to access these services. If Deployment is responsible for keeping the Pod group running, Service is responsible for making sure that the Pod group is connected to a reasonable network.
A Pod is only one running instance of a service: it can be stopped on one node at any moment and replaced by a new Pod with a new IP on another node, so a service cannot be exposed via a fixed IP and port. Stable service delivery requires service discovery and load balancing. Service discovery finds the backend instances for the service a client wants to reach; in a K8s cluster, the thing a client accesses is a Service object. Each Service has a valid virtual IP inside the cluster, and the cluster reaches the Service through that virtual IP. There are three types of Service:
- ClusterIP: the default type; automatically assigns a virtual IP reachable only from within the cluster.
- NodePort: on top of ClusterIP, binds a port on every node so that the Service can be reached at `<NodeIP>:NodePort`.
- LoadBalancer: on top of NodePort, creates an external load balancer through the Cloud Provider and forwards requests to `<NodeIP>:NodePort`.
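A minimal sketch with illustrative names and ports; only the `type` field (plus the NodePort-specific `nodePort`) distinguishes the variants:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort        # or ClusterIP (the default) / LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80          # the Service's cluster-internal port
      targetPort: 8080  # the container port traffic is forwarded to
      nodePort: 30080   # NodePort only: the port opened on every node
```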
Load balancing for microservices inside a Kubernetes cluster is implemented by kube-proxy, a distributed proxy server that runs on every Kubernetes node. This design has a built-in scalability advantage: the more nodes that need to access services, the more kube-proxy instances there are providing load balancing, and availability rises with them. Compare this with the conventional approach of a server-side reverse proxy, where the reverse proxy's own load balancing and high availability become one more problem to solve.
Cluster deployment
Earlier in this Kubernetes series we covered building locally with Docker, building a cluster manually on Ubuntu, and standing one up quickly with Rancher. Rancher completes the installation of a Kubernetes cluster automatically and visually, eliminating the tedious manual steps and letting you get on with business development quickly:
```bash
$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
```
Install Rancher Server, control plane, etcd, and worker on the master node. Choose Flannel as the network component, select the host's roles, and enter the host's internal and external IP addresses in the customized host run command.

We then copy the generated script and run it on the corresponding machine; Rancher automatically creates the Kubernetes cluster and serves its web UI on port 80 by default. To add nodes later, find the cluster in Rancher's web interface, choose "Edit Cluster", and select the Worker role to add a Kubernetes node.
Helm
Helm is an open source tool created by Deis to help simplify deployment and management of Kubernetes applications. In this chapter, we will also use Helm to simplify the installation of many applications.
You can install Helm with Snap on Linux:

```bash
$ sudo snap install helm --classic
```

Install Tiller onto the Kubernetes cluster with the following command:

```bash
$ helm init --upgrade
```
By default, Helm installs and configures Tiller on the Kubernetes cluster with the gcr.io/kubernetes-helm/tiller image, and uses kubernetes-charts.storage.googleapis.com as the default stable repository. Since gcr.io and storage.googleapis.com may not be reachable from China, Alibaba Cloud Container Service provides mirrors for both. Run the following commands to configure Helm with the Alibaba Cloud mirrors:

```bash
$ helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.5.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# Delete the default source
$ helm repo remove stable

# Add a new domestic mirror source (either of the following)
$ helm repo add stable https://burdenbear.github.io/kube-charts-mirror/
$ helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# Check whether the Helm source has been added
$ helm repo list
```
Common Helm commands are as follows:

```bash
# View all Helm Charts available in the repository
$ helm search

# Update the Charts list to get the latest versions
$ helm repo update

# View a Chart's variables
$ helm inspect values stable/mysql

# View the Charts installed on the cluster
$ helm list

# Remove the deployment of certain Charts
$ helm del --purge wordpress-test

# Add authorization for the Tiller deployment
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
```
kubectl
Information retrieval
The get command retrieves information about one or more of the cluster's resources; use `--help` to see the details. kubectl's help information and examples are quite detailed and easy to follow, and I suggest getting into the habit of consulting them. `kubectl get` can list all of the cluster's resources, including nodes, running Pods, ReplicationControllers, and Services.
```bash
$ kubectl get [(-o|--output=)json|yaml|wide|go-template=...|go-template-file=...|jsonpath=...|jsonpath-file=...] (TYPE [NAME | -l label] | TYPE/NAME ...) [flags]
```
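A few concrete forms of that synopsis (the resource names are illustrative):

```bash
$ kubectl get nodes                      # all cluster nodes
$ kubectl get pods -o wide               # Pods, with node and IP columns
$ kubectl get pods -l app=nginx          # Pods filtered by label
$ kubectl get deployment/nginx -o yaml   # a single resource as a full manifest
```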
Operation and Management
kubectl run, like docker run, runs an image; here we use it to start a SonarQube image:
```bash
$ kubectl run sonarqube --image=sonarqube:5.6.5 --replicas=1 --port=9000
deployment "sonarqube" created
```
This command creates a Deployment for us:

```bash
$ kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
sonarqube   1         1         1            1           5m
```
We can also run an image interactively:
```bash
$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
```
K8s runs the image inside a Pod, which makes volume management and network sharing easier. With `kubectl get pods` you can clearly see that a Pod has been created:
```bash
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
sonarqube-1880671902-s3fdq   1/1     Running   0          6m

# Interactively run a command in the Pod
$ kubectl exec -it sonarqube-1880671902-s3fdq -- /bin/bash
```
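The volume-and-network claim can be sketched with a hypothetical two-container Pod (all names and commands below are illustrative): both containers share one network namespace, so they see each other on localhost, and both can mount the same volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {} # scratch volume that lives as long as the Pod
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: web
      image: nginx # serves the file the writer container produced
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
```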
kubectl can be used to remove the created Deployment and Pod:

```bash
$ kubectl delete pods sonarqube-1880671902-s3fdq
$ kubectl delete deployment sonarqube
```
More generally, kubectl can manage the application lifecycle based on YAML files:

```bash
# Create
$ kubectl create -f yamls/mysql.yaml

# Remove
$ kubectl delete -f yamls/mysql.yaml

# Create from multiple files at once
$ kubectl create -f yamls/

# Delete from multiple files at once
$ kubectl delete -f yamls/
```
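Note that `kubectl create`/`kubectl delete` are imperative commands; the declarative counterpart discussed in the design section is `kubectl apply`, which reconciles the cluster with the files and is safe to run repeatedly (using the same hypothetical directory as above):

```bash
# Create or update everything under yamls/ to match the declared state
$ kubectl apply -f yamls/
```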
Context switch
After the K8s cluster is installed, you can download the cluster's configuration file into your local kubectl configuration:

```bash
$ mkdir $HOME/.kube
$ scp root@<master-public-ip>:/etc/kubernetes/kube.conf $HOME/.kube/config
```
You can then inspect and switch contexts:

```bash
$ unset KUBECONFIG

# Check the currently loaded context
$ kubectl config current-context

# Browse the available contexts
$ kubectl config get-contexts

# Switch to the specified context
$ kubectl config use-context context-name
```
Service configuration
In the Kubernetes Field/Typical Applications section, we introduced many common middleware configurations and deployments. This section uses a simple HTTP server as an example to describe the common service configuration process.
Deployment & Service
In K8s Boilerplates we define a simple Nginx Deployment and Service, for running the workload in the cluster and exposing it externally, respectively:
```yaml
# nginx-deployment-service.yaml
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
  selector:
    app: nginx
  type: NodePort
```
```bash
$ kubectl create -f https://raw.githubusercontent.com/wx-chevalier/Backend-Boilerplates/master/K8s/Base/nginx-deployment-service.yaml

$ kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-56db997f77-2q6qz   1/1     Running   0          3m21s
nginx-56db997f77-fv2zs   1/1     Running   0          3m21s
nginx-56db997f77-wx2q5   1/1     Running   0          3m21s

$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           3m36s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP        21h
nginx        NodePort    10.43.8.50   <none>        80:32356/TCP   4m5s
```
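Since the Service is of type NodePort, the Nginx welcome page is now also reachable on the assigned node port of any node (32356 in the output above; the value will differ on your cluster):

```bash
$ curl http://127.0.0.1:32356
```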
Ingress
Ingress is a Kubernetes resource and a way to expose services inside a Kubernetes cluster to the outside world. Ingress consists of two parts: the Ingress objects themselves and an Ingress Controller. Ingress abstracts rules that would otherwise be configured by hand into Ingress objects, created and managed as YAML files. The Ingress Controller interacts with the Kubernetes API to dynamically sense changes to the cluster's Ingress rules.
There are many Ingress Controller types available, such as Nginx, HAProxy, Traefik, etc. Nginx Ingress uses ConfigMap to manage Nginx configuration.
Installing Ingress with Helm
```bash
$ helm install --name nginx-ingress --set "rbac.create=true,controller.service.externalIPs[0]=172.19.157.1,controller.service.externalIPs[1]=172.19.157.2,controller.service.externalIPs[2]=172.19.157.3" stable/nginx-ingress

NAME:   nginx-ingress
LAST DEPLOYED: Tue Aug 20 14:50:13 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                      DATA  AGE
nginx-ingress-controller  1     0s

==> v1/Pod(related)
NAME                                            READY  STATUS             RESTARTS  AGE
nginx-ingress-controller-5f874f7bf4-nvsvv       0/1    ContainerCreating  0         0s
nginx-ingress-default-backend-6f598d9c4c-vj4v8  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                           TYPE          CLUSTER-IP    EXTERNAL-IP                             PORT(S)                     AGE
nginx-ingress-controller       LoadBalancer  10.43.115.59  172.19.157.1,172.19.157.2,172.19.157.3  80:32122/TCP,443:32312/TCP  0s
nginx-ingress-default-backend  ClusterIP     10.43.8.65    <none>                                  80/TCP                      0s

==> v1/ServiceAccount
NAME           SECRETS  AGE
nginx-ingress  1        0s

==> v1beta1/ClusterRole
NAME           AGE
nginx-ingress  0s

==> v1beta1/ClusterRoleBinding
NAME           AGE
nginx-ingress  0s

==> v1beta1/Role
NAME           AGE
nginx-ingress  0s

==> v1beta1/RoleBinding
NAME           AGE
nginx-ingress  0s

==> v1beta1/Deployment
NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
nginx-ingress-controller       0/1    1           0          0s
nginx-ingress-default-backend  0/1    1           0          0s

==> v1beta1/PodDisruptionBudget
NAME                           MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
nginx-ingress-controller       1              N/A              0                    0s
nginx-ingress-default-backend  1              N/A              0                    0s
```
After the deployment we can see that two components have been added to Kubernetes: nginx-ingress-controller and nginx-ingress-default-backend. nginx-ingress-controller is a layer-7 load balancer providing HTTP routing, sticky sessions, SSL termination, SSL passthrough, TCP and UDP load balancing, and more. nginx-ingress-default-backend is the default backend: when a request from outside the cluster enters through the ingress and cannot be matched to a backend Service, it is routed to the default backend.
```bash
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP     EXTERNAL-IP                              PORT(S)                      AGE
kubernetes                      ClusterIP      10.43.0.1      <none>                                   443/TCP                      20h
nginx-ingress-controller        LoadBalancer   10.43.115.59   172.19.157.1,172.19.157.2,172.19.157.3   80:32122/TCP,443:32312/TCP   77m
nginx-ingress-default-backend   ClusterIP      10.43.8.65     <none>                                   80/TCP                       77m

$ kubectl --namespace default get services -o wide -w nginx-ingress-controller
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP                              PORT(S)                      AGE   SELECTOR
nginx-ingress-controller   LoadBalancer   10.43.115.59   172.19.157.1,172.19.157.2,172.19.157.3   80:32122/TCP,443:32312/TCP   77m   app=nginx-ingress,component=controller,release=nginx-ingress
```
Because we exposed the service with externalIPs, nginx-ingress-controller listens on ports 80/443 of the three node hosts, and we can reach it from any node. Since no Ingress resource has been created in the cluster yet, requests sent directly to the external IP are routed to nginx-ingress-default-backend, which serves two URLs by default: `/healthz`, used for health checks, returns 200, and `/` returns 404.
```bash
$ curl 127.0.0.1/
default backend - 404

$ curl 127.0.0.1/healthz/
# returns 200
```
If we need to create our own Ingress configuration, the following manifest can serve as a reference:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - backend:
              serviceName: exampleService
              servicePort: 80
            path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls
```
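After applying such a manifest, routing is driven by the HTTP Host header, which can be simulated with curl against one of the controller's external IPs (the file name here is illustrative; the IP is one of the externalIPs configured above):

```bash
$ kubectl apply -f example-ingress.yaml

# Present the Ingress host while connecting to the controller directly
$ curl -H "Host: www.example.com" http://172.19.157.1/
```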
If you want to use TLS, you need to create a Secret that contains the certificate and key:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
```
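As a sketch of an alternative workflow (the file paths are placeholders), kubectl can build the same kind of Secret from local certificate files, handling the base64 encoding itself:

```bash
$ kubectl create secret tls example-tls \
    --cert=path/to/tls.crt \
    --key=path/to/tls.key \
    --namespace foo
```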
WordPress
After Helm is installed, let’s test and deploy a WordPress application:
```bash
$ helm install --name wordpress-test --set "ingress.enabled=true,persistence.enabled=false,mariadb.persistence.enabled=false" stable/wordpress

NAME: wordpress-test
...
```
Here we use Ingress load balancing to access the service as follows:
```bash
$ kubectl get ingress
NAME                             HOSTS             ADDRESS                                  PORTS   AGE
wordpress.local-wordpress-test   wordpress.local   172.19.157.1,172.19.157.2,172.19.157.3   80      59m

$ curl -i http://wordpress.local -x 127.0.0.1:80
HTTP/1.1 200 OK
Server: nginx/1.15.6
Date: Tue, 20 Aug 2019 07:55:21 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/7.0.27
Link: <http://wordpress.local/wp-json/>; rel="https://api.w.org/"
```
You can also follow the Charts instructions to obtain the administrator user and password for the WordPress site using the following command:
```bash
$ echo Username: user
$ echo Password: $(kubectl get secret --namespace default wordpress-test-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
```
Further Reading
A Bear's Technical Path: Pointing North ☯ is the navigation and index for the knowledge repositories the author has accumulated across different fields, meant to help readers quickly find what they need. The road ahead is long and I keep searching high and low; I hope these notes can be of some help to every reader who crosses the author's tracks, so that we may each reach the far shore of this vast galaxy.

You can read the author's series of articles on Gitbook through the navigation below, covering fields from technical overviews, programming languages and theory, the Web and the big front end, server-side development and infrastructure, cloud computing and big data, to data science, artificial intelligence, and product design:
- Knowledge systems: Awesome Lists | CS resource collection, Awesome CheatSheets | quick-reference manuals, Awesome Interviews | essential interview preparation, Awesome RoadMaps | programmer's advancement guide, Awesome MindMaps | knowledge mind maps, Awesome-CS-Books | open-source books (.pdf)
- Programming languages: Programming Language Theory, Java Field, JavaScript Field, Go Field, Python Field, Rust Field
- Software engineering, patterns, and architecture: Programming Paradigms and Design Patterns, Data Structures and Algorithms, Software Architecture Design, Neatness and Refactoring, R&D Methods and Tools
- Web and big front end: Modern Web Development Fundamentals and Engineering Practice, Data Visualization, iOS, Android, Hybrid Development and Cross-End Applications
- Server-side development practice and engineering architecture: Server-Side Fundamentals, Microservices and Cloud Native, Testing and High Availability Assurance, DevOps, Node, Spring, Information Security and Penetration Testing
- Distributed infrastructure: Distributed Systems, Distributed Computing, Databases, Networks, Virtualization and Orchestration, Cloud Computing and Big Data, Linux and Operating Systems
- Data science, artificial intelligence, and deep learning: Mathematical Statistics, Data Analysis, Machine Learning, Deep Learning, Natural Language Processing, Tools and Engineering, Industry Applications
- Product design and user experience: Product Design, Interactive Experience, Project Management
- Industry applications: Industry Myths, Functional Domains, E-Commerce, Intelligent Manufacturing
In addition, you can visit xCompass to interactively search for articles, links, books, and courses, or consult the MATRIX article-and-code index for more detailed directory navigation, including article lists and project source code. Finally, you can also follow the WeChat official account "A Bear's Technical Path" for the latest updates.