Kubernetes, or K8s for short, is an open-source project released by Google in 2014.
What problems does Kubernetes solve?
Real production applications contain multiple containers, and those containers are most likely deployed across multiple server hosts. Kubernetes provides orchestration and management capabilities for deploying containers at scale for such workloads. Kubernetes orchestration lets you build multi-container application services, schedule or scale those containers across a cluster, and manage their health over time.
- Kubernetes basics
- Kubernetes optimization
- Kubernetes in practice
Kubernetes involves many concepts and can feel overwhelming at first. If you are new to it, consider skipping ahead to the hands-on section and getting something running first.
Kubernetes basics
There are several important concepts in K8s.
Concept | Description |
---|---|
cluster | A K8s cluster: the pool of compute, storage, and network resources K8s manages |
master | The machine at the core of the cluster, responsible for managing and controlling the entire cluster |
node | A machine in the cluster that carries the workloads |
pod | The smallest scheduling unit in K8s; each pod contains one or more containers |
label | A key=value pair that can be attached to resource objects so that other resources can select them |
controller | The mechanism through which K8s manages pods |
service | A unified entry point that automatically distributes requests to the right pods |
namespace | A logical division of one cluster into multiple virtual clusters |
Cluster
A cluster is a collection of compute, storage, and network resources that K8s uses to run various container-based applications.
The Master node
The master is the brain of the cluster. It runs kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and the pod network; a quick way to inspect these components is shown after the list below.
- kube-apiserver
- The apiserver is the front-end interface of the K8s cluster and provides a RESTful API. It allows client tools and other K8s components to manage every kind of resource in the cluster.
- kube-scheduler
- Responsible for deciding which node each pod runs on. When scheduling, the scheduler takes full account of each node's load as well as the application's requirements for high availability, performance, and data affinity.
- kube-controller-manager
- Responsible for managing cluster resources and keeping them in their desired state. The controller-manager is made up of multiple controllers, including the replication controller, endpoint controller, namespace controller, service account controller, and so on.
- Different controllers manage different resources; for example, the workload controllers manage the lifecycle of Deployments, StatefulSets, and DaemonSets, while the namespace controller manages namespace resources.
- etcd (distributed key-value store)
- Responsible for saving the cluster's configuration and the state of its resources. When data changes, etcd quickly notifies the relevant K8s components.
- Pod network
- For pods to communicate, the cluster must deploy a pod network; Flannel is one of the options.
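On a kubeadm-built cluster, these control-plane components themselves run as pods, so you can inspect them directly. A quick check, assuming kubectl is already configured for the cluster:

```sh
# List the control-plane components in the kube-system namespace;
# expect kube-apiserver-*, kube-scheduler-*, kube-controller-manager-*,
# etcd-*, plus the pod network add-on (e.g. weave or flannel pods)
kubectl get pods -n kube-system
```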
The Node node
A node is where pods run. The K8s components running on a node include kubelet, kube-proxy, and the pod network.
- kubelet
- The kubelet is the node's agent. When the master's kube-scheduler decides to run a pod on a node, it sends the pod's configuration to that node's kubelet, which creates and runs the containers accordingly and reports their health status back to the master.
- kube-proxy
- Every node runs the kube-proxy service, which forwards requests destined for a service to a backend pod. If there are multiple replicas, kube-proxy load-balances across them.
- Pod network
- For pods to communicate with each other, the cluster must deploy a pod network; Flannel is one of the options.
Pod
Each pod contains one or more containers, and the containers in a pod are scheduled as a unit by the master onto a node.
Why does K8s manage containers through pods instead of operating on containers directly?
1. Some containers are naturally tightly coupled and together make up one complete service; K8s also adds a pause container to each pod to manage the state of the internal container group.
2. All containers in a pod share the same IP address and volumes, which makes communication and data sharing between them easy.
When do you need to define multiple containers in a pod?
A: When the containers are very closely related and need to share resources directly; for example, a crawler and a web server application, where the web server relies heavily on the crawler for data.
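A sketch of that pattern, assuming hypothetical crawler and web-server images: the two containers share data through an emptyDir volume and share the pod's network namespace.

```yaml
# crawler-web.yaml - two tightly coupled containers in one pod (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: crawler-web
spec:
  containers:
    - name: web-server
      image: example/web-server     # hypothetical image
      volumeMounts:
        - name: shared-data
          mountPath: /data          # web server reads crawled data here
    - name: crawler
      image: example/crawler        # hypothetical image
      volumeMounts:
        - name: shared-data
          mountPath: /data          # crawler writes data here
  volumes:
    - name: shared-data
      emptyDir: {}                  # shared by both containers, lives with the pod
```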
Label
A label is a key=value pair, where both key and value are specified by the user.
Usage scenarios for labels:
- kube-controller
- Uses a label selector to pick out the pod replicas it monitors, keeping the number of replicas at the expected count.
- kube-proxy
- Uses a label selector to select the corresponding pods and automatically builds the request-forwarding routing table from each service to its pods, implementing the service's intelligent load balancing.
- The kube-scheduler process can implement pod-directed scheduling by defining labels on nodes and using a node selector in the pod spec.
In short, labels let you attach multiple groups of tags to an object. Together, labels and label selectors form the core grouping model in K8s, enabling fine-grained management of the objects in a cluster; a short example follows.
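For example (a sketch; the label values are illustrative), labels live in an object's metadata and are queried with selectors:

```yaml
# Fragment: attach labels to a pod in its metadata
metadata:
  labels:
    app: mytest
    env: production
```

```sh
# Select resources by label (equality-based and set-based selectors)
kubectl get pods -l app=mytest
kubectl get pods -l 'env in (production, staging)'
```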
Controller
K8s typically does not create pods directly but manages them through controllers. A controller defines a pod's deployment characteristics, such as how many replicas there are and which nodes they run on. To cover different business scenarios, K8s provides several types of controller.
- Deployment
- The most commonly used controller. It manages multiple pod replicas and ensures the pods run in the desired state; it uses ReplicaSet under the hood.
- ReplicaSet
- Manages multiple replicas of a pod; using Deployment is usually sufficient.
- DaemonSet
- For scenarios where each node runs at most one replica of a pod.
- Usage scenarios
- Run a storage daemon, such as glusterd or ceph, on every node of the cluster.
- Run a log-collection daemon, such as Fluentd or Logstash, on every node.
- Run a monitoring agent, such as Prometheus Node Exporter or collectd, on every node.
- StatefulSet
- Guarantees that each pod replica keeps the same name throughout its lifecycle, which no other controller provides.
- Job
- For applications that are deleted once they finish running, whereas pods in other controllers typically keep running for long periods.
Tip: with workloads created by the Deployment controller, if a pod fails, a new pod is automatically created to maintain the number of pods specified in the configuration.
Job
Containers can be divided into two types by how long they run: service containers and work containers.
Service containers provide a service continuously and need to stay running, such as an HTTP server. A work container performs a one-off task, such as a batch job, and exits when it completes.
The Deployment, ReplicaSet, and DaemonSet controller types all manage service containers; for work containers, we use Jobs.
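A minimal Job manifest might look like the following sketch (the pi computation is the stock example; image and command are illustrative):

```yaml
# job.yaml - a one-off work-class task
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never   # work containers must not restart as Always
  backoffLimit: 4            # retries before the Job is marked failed
```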
Service
A Service defines a way to access a set of pods and is often referred to as a microservice. Which pods a service covers is determined by label matching. Services load-balance requests to pods using iptables.
Why do we need services?
- Pods have a lifecycle. They can be created and destroyed, and once destroyed they are gone for good. Pods in a K8s cluster may be created and destroyed frequently, and each rebuild produces a new IP address.
- A service logically represents a set of pods, selected by label. Clients only need the service's IP address; K8s establishes and maintains the mapping between the service and its pods. However the pods change, the client is unaffected, because the service does not change.
How do external clients access a service?
- ClusterIP: exposes the service on a cluster-internal IP address. With this value, the service is reachable only from within the cluster. This is the default ServiceType.
- NodePort: exposes the service on each node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is created automatically. You can access the service from outside the cluster by requesting nodeIP:nodePort.
- LoadBalancer: uses the cloud provider's load balancer to expose the service externally. The external load balancer routes to the automatically created NodePort and ClusterIP services.
- ExternalName: maps the service to the contents of the externalName field (for example, foo.bar.example.com) by returning a CNAME record with that value. No proxy of any kind is set up; this is supported only by kube-dns in K8s 1.7 or later.
Namespace
If multiple users share the same K8s cluster, how do you keep the controllers, pods, and other resources they create separate?
A: Use namespaces.
A namespace logically divides one physical cluster into multiple virtual clusters, and resources in different namespaces are completely isolated from one another.
K8s creates two namespaces by default.
- default: if you do not specify a namespace when creating a resource, it is placed in this namespace.
- kube-system: holds system resources created by K8s itself.
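A quick sketch of working with namespaces (the dev namespace is illustrative):

```sh
# Create a namespace and deploy into it
kubectl create namespace dev
kubectl apply -f deployment.yaml -n dev

# Resources in different namespaces are listed separately
kubectl get pods -n dev
kubectl get pods -n kube-system
```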
Kubernetes optimization
- Health check
- Data management
- Password management
- Cluster monitoring
Health check
Powerful self-healing is an important feature of container orchestration engines like K8s. The default form of self-healing is automatically restarting failed containers. Beyond that, users can use the liveness and readiness probe mechanisms to set up more sophisticated health checks and thereby achieve:
- Zero downtime deployment
- Avoid deploying invalid images
- More secure rolling upgrades
By default, K8s considers a container failed, and restarts it, only when the container process exits with a non-zero value. If we want finer-grained control over container restarts, we can use liveness and readiness probes.
In the classic example, the liveness and readiness probes periodically check whether the file /tmp/healthy exists: if it does, the program is considered healthy; if not, action is taken.
On failure, a liveness probe restarts the container, while a readiness probe marks the container unavailable.
Use a liveness probe if the container should be restarted under certain conditions; use a readiness probe if you need to guarantee the container only serves traffic when it is genuinely ready.
Liveness and readiness probes can be used together: liveness decides whether a container needs a restart, and readiness decides whether it should receive traffic.
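A minimal sketch combining both probes, based on the /tmp/healthy check described above (image and timings are illustrative):

```yaml
# health-check.yaml - sketch of liveness + readiness probes on /tmp/healthy
apiVersion: v1
kind: Pod
metadata:
  name: healthcheck
spec:
  containers:
    - name: healthcheck
      image: busybox
      # Create the health file, then remove it after 30s to simulate a failure
      args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
      livenessProbe:             # on failure: restart the container
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 10  # wait 10s after start before the first probe
        periodSeconds: 5         # probe every 5s
      readinessProbe:            # on failure: mark the container not ready
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 10
        periodSeconds: 5
```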
Data management
As mentioned above, pods may be destroyed and created frequently, and when a container is destroyed, data stored in its internal filesystem is erased. To persist a container's data, use a K8s Volume.
A volume's lifecycle is independent of the container: containers in a pod may be destroyed and rebuilt, but the volume is retained. A volume is essentially a directory; when it is mounted into a pod, all containers in that pod can access it.
Volumes come in several types.
- emptyDir
- Data is stored inside the pod and lives as long as the pod does: it survives container restarts but is deleted together with the pod.
- hostPath
- Data is stored on the host node's filesystem.
- AWS Elastic Block Store
- Data is stored on cloud storage.
- Persistent Volume
- Externally provisioned storage; a PersistentVolumeClaim (PVC) is used to request storage space, which is mounted when the pod is created.
Volumes support many storage backends. When a container reads or writes data through a volume, it does not need to care whether the data lives on the local node or on a cloud disk; to the container, every type of volume is just a directory.
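For instance, a hostPath volume can be declared like this (a sketch; the paths are illustrative):

```yaml
# hostpath-demo.yaml - data survives the pod because it lives on the node
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
    - name: app
      image: busybox
      args: ["/bin/sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: node-data
          mountPath: /data          # path seen inside the container
  volumes:
    - name: node-data
      hostPath:
        path: /var/lib/demo-data    # path on the host node
        type: DirectoryOrCreate     # create the directory if missing
```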
Password management
An application may need sensitive information at startup, such as the username and password for a database. Baking that information into the container image is clearly a bad idea; the solution K8s offers is the Secret.
A Secret keeps sensitive data out of plain-text configuration files (values are stored base64-encoded rather than in the clear). A Secret is mounted into a pod as a volume, so containers can consume the sensitive data as files, or alternatively as environment variables.
Create a Secret from a configuration file, mysecret.yaml:
```yaml
# mysecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  # Values under `data` must be base64 encoded,
  # e.g. echo -n 'admin' | base64
  username: YWRtaW4=   # admin
  password: MTIz       # 123
```
After saving the configuration file, run kubectl apply -f mysecret.yaml to create the Secret.
Use the created Secret in a pod:
```yaml
# mypod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: yhlben/notepad
      volumeMounts:
        - name: foo
          mountPath: '/etc/foo'   # secret keys appear as files under this path
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
```
Run kubectl apply -f mypod.yaml to create the pod and use the Secret. Once the pod is up, the Secret's values are available inside the container as the files /etc/foo/username and /etc/foo/password.
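Alternatively, the same Secret can be consumed as environment variables; a sketch (the variable names are illustrative):

```yaml
# Fragment of a pod spec: expose secret keys as environment variables
spec:
  containers:
    - name: mypod
      image: yhlben/notepad
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret   # the Secret created above
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
```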
Cluster monitoring
Creating a K8s cluster and deploying containerized applications is only the first step. Once the cluster is running, we need to make sure everything in it is healthy, which requires monitoring the cluster.
Common visual monitoring tools are as follows.
- Weave Scope
- Heapster
- Prometheus Operator
For setup steps, refer to each tool's documentation; they are not covered in detail here.
Cluster monitoring lets us detect cluster problems in time, but logs are needed for deeper troubleshooting.
Common log management tools are as follows.
- Elasticsearch stores logs and provides a query interface.
- Fluentd collects logs from K8s and sends them to Elasticsearch.
- Kibana offers a visual page that allows users to browse and search logs.
Kubernetes in practice
Let's deploy a notepad project on K8s. The project is built from the yhlben/notepad image; once deployed, it provides a web-based notepad service on port 8083.
To avoid the various pitfalls of installing K8s, the demo uses Play with Kubernetes.
Start by creating 3 servers on Play with Kubernetes, with node1 as the master node and node2 and node3 as worker nodes. Then perform the following steps:
- Create a cluster
- Add nodes
- Initialize the cluster network
- Create a Deployment
- Create a Service
- Verify the deployment
Create a cluster
To create the cluster, run kubeadm init on node1.
```sh
kubeadm init --apiserver-advertise-address $(hostname -i)
```
When it finishes, kubeadm prints a token that other nodes can use to join the cluster.
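kubeadm also prints instructions for pointing kubectl at the new cluster; on a standard install they are the following boilerplate (on Play with Kubernetes you are root, so sudo may be unnecessary):

```sh
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```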
Adding nodes
On node2 and node3, run the following command to join the cluster on node1.
```sh
kubeadm join 192.168.0.8:6443 --token nfs9d0.z7ibv3xokif1mnmv \
    --discovery-token-ca-cert-hash sha256:6587f474ae1543b38954b0e560832ff5b7c67f79e1d464e7f59e33b0fefd6548
```

After the command completes, node2 and node3 have joined the cluster.
Viewing Cluster Status
On node1, run the following command.
```sh
kubectl get node
```
The cluster now contains node1, node2, and node3, but all three nodes are in the NotReady state. Why? Because the cluster network has not been created yet.
Creating a Cluster Network
Run the following command to create the cluster network.
```sh
kubectl apply -n kube-system -f \
    "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
After running the command, wait a moment and check the node status again; all three nodes in the cluster are now Ready.
Create a Deployment
To create the Deployment from a configuration file, create a new deployment.yaml file with the following contents:
```yaml
# deployment.yaml
# Config file format version
apiVersion: apps/v1
# The type of resource to create
kind: Deployment
# Metadata for the resource
metadata:
  name: notepad
# Specification
spec:
  # Number of pod replicas
  replicas: 3
  # Find pods by label
  selector:
    matchLabels:
      app: mytest
  # Pod template
  template:
    # Pod metadata
    metadata:
      # Pod labels
      labels:
        app: mytest
    # Pod specification
    spec:
      containers:
        - name: notepad
          image: yhlben/notepad
          ports:
            - containerPort: 8083
```
After the file is created, run the following command:
```sh
kubectl apply -f ./deployment.yaml
# deployment.apps/notepad created
```
View the deployed pods.
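The command behind that view (standard kubectl; the -o wide flag adds the NODE column):

```sh
# Show which node each pod landed on
kubectl get pods -o wide
```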
As you can see, the Deployment controller created three pods, spread across the node2 and node3 machines; if there were more machines, the load would likewise be balanced across them automatically.
Create a Service
Creating a service is similar to creating a deployment. Create a service.yaml file with the following contents:
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Expose the service on a port on each node
  type: NodePort
  ports:
    - port: 80
      # The pod port the service proxies to
      targetPort: 8083
      # The port on each node for access from outside the cluster
      nodePort: 30036
  # Match pods by label
  selector:
    app: mytest
```
After the file is created, run the following command:
```sh
kubectl apply -f ./service.yaml
# service/my-service created
```
Viewing the Creation Result
Use kubectl get deployment and kubectl get service to check the results.
As you can see, both the deployment and the service were created successfully. The service is exposed at IP 10.106.74.65 on port 80; because of targetPort, it forwards requests on to port 8083 on the pods.
Accessing the deployment results
After a successful deployment, we can access the service in two ways:
- Inside the cluster: access via the service's clusterIP and port.
- Outside the cluster: access via any node's IP and the nodePort.
```sh
# Intra-cluster access
curl 10.106.74.65

# Out-of-cluster access
# 192.168.0.12 is the IP address of node2
# 192.168.0.11 is the IP address of node3
curl 192.168.0.12:30036
curl 192.168.0.11:30036
```
Intra-cluster access.
Out-of-cluster access.
At this point the deployment has succeeded. You may well be wondering: if deploying such a simple web application is this much trouble, where exactly does K8s shine? Read on.
K8s operations
The project is deployed. Next, let's practice some operations tasks and feel the convenience K8s brings.
Case 1
The company needs at least 100 containers to handle user demand during the Double 11 (Singles' Day) event. What should we do?
First, make full use of the current servers by creating more containers to join the load balancing; docker stats shows the system resources each container consumes. If demand still cannot be met after that, calculate from the number of containers still required how many machines to purchase, making rational use of resources.
- Purchase servers, set them up as worker nodes, and join them to the cluster.
- Run the scale-out command.
Run the following command to scale out to 100 containers.
```sh
kubectl scale deployments/notepad --replicas=100
```
As shown in the figure, when scaling out to 10 pods, node2 and node3 share the load.
You can also scale by modifying the replicas field in deployment.yaml and running kubectl apply -f deployment.yaml. When the event ends, simply remove the extra servers and reduce the container count to restore the previous setup.
Case 2
The Double 11 event is in full swing, but an urgent requirement suddenly needs to go live. How do we do a rolling update?
A rolling update means updating the code with zero downtime. Run the following command to change the image.
```sh
kubectl set image deployments/notepad notepad=yhlben/notepad:new
```
You can also upgrade by modifying the image field in deployment.yaml and running kubectl apply -f deployment.yaml.
If something goes wrong with the update, K8s has a built-in one-command rollback to the previous version:
```sh
kubectl rollout undo deployments/notepad
```
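Two related commands (standard kubectl, not shown in the original screenshots) are useful for watching a rollout and inspecting the revision history that undo relies on:

```sh
# Watch the rolling update until it completes
kubectl rollout status deployments/notepad
# Inspect the revision history used by `rollout undo`
kubectl rollout history deployments/notepad
```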
These two cases show the great convenience K8s brings to operations: fast, efficient project deployment, dynamic scaling, rolling upgrades, and optimized use of hardware resources as needed.
Conclusion
The goal of this article is a gentle introduction to K8s by way of a simple cluster, but there were plenty of pitfalls along the way:
- Building with Minikube
- Quickly spins up a single-node K8s environment.
- However, when building a K8s cluster locally with Minikube, many packages failed to download, and even a global proxy did not help.
- Using a server on Google Cloud
- On Google Cloud the network problems went away, and every required package could be installed.
- However, a fresh server needs its whole environment installed (Docker, kubeadm, kubectl, and so on); the process is tedious and errors are likely.
- One careless click at some point turned the trial account into a paid account, and the $300 free credit was gone.
- Using Play with Kubernetes
- No need to install the K8s environment; it comes preinstalled, which is nice.
- But a single machine has too little capacity, and slightly larger images cannot even be pulled.
- Occasionally I would get kicked out as soon as I created an instance.
- No public IP address is provided, so access from the Internet cannot be verified.
Finally, I recommend the book Play with Kubernetes in 5 Minutes a Day, a very good hands-on introduction to K8s. Its many small exercises, progressing from easy to hard, are what really made K8s click for me. Much of the theory in this article also comes from that book.
For more articles, feel free to star and subscribe to my blog.