What is a controller
There are a number of controllers built into Kubernetes; each acts as a state machine that drives Pods toward a specific desired state and behavior.
Types of controllers
ReplicationController and ReplicaSet
ReplicationController (RC) ensures that the number of replicas of a containerized application always matches the user-defined count. If a container exits unexpectedly, a new Pod is created to replace it, and surplus Pods are automatically reclaimed.
In newer versions of Kubernetes, ReplicaSet is the proposed replacement for ReplicationController. Apart from the name, the two are not fundamentally different, except that ReplicaSet additionally supports set-based selectors.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: myapp
        image: myapp:v1
        env:
        - name: GET_HOST_FROM
          value: dns
        ports:
        - containerPort: 80
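A quick way to watch the ReplicaSet hold its replica count (the file name rs.yaml and the Pod suffix are placeholders):
kubectl apply -f rs.yaml
kubectl get rs frontend              # DESIRED, CURRENT and READY should all reach 3
kubectl delete pod frontend-xxxxx    # delete one Pod; the ReplicaSet immediately creates a replacement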
Deployment
Deployment provides a declarative way to manage Pods and ReplicaSets, replacing the older ReplicationController and making applications easier to manage. Typical scenarios include:
- Defining a Deployment to create a Pod and a ReplicaSet
- Rolling updates and rollbacks of applications
- Scaling up and down
- Pausing and resuming a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        ports:
        - containerPort: 80
Scaling
kubectl scale deployment nginx-deployment --replicas=10
If the cluster supports HPA, you can also set up autoscaling for the Deployment:
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
Updating the image is also straightforward:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Rollback
kubectl rollout undo deployment/nginx-deployment     # roll back to the previous revision
kubectl rollout status deployment/nginx-deployment   # watch the rollout status
kubectl rollout pause deployment/nginx-deployment    # pause the rollout
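To roll back to a specific revision rather than just the previous one, the Deployment keeps a revision history (the revision number below is illustrative):
kubectl rollout history deployment/nginx-deployment               # list recorded revisions
kubectl rollout undo deployment/nginx-deployment --to-revision=2  # roll back to revision 2
kubectl rollout resume deployment/nginx-deployment                # resume a paused rollout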
DaemonSet
DaemonSet ensures that all (or some) nodes run a copy of a Pod. When a node joins the cluster, a Pod is added on it; when a node is removed from the cluster, that Pod is reclaimed.
Some typical uses of DaemonSet:
- Running a cluster storage daemon, such as glusterd or ceph, on each node
- Running a log collection daemon, such as fluentd or logstash, on each node
- Running a monitoring daemon on each node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example   # must match the labels in template.metadata.labels below
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: myapp:v1
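Once applied, each eligible node should run exactly one copy, which you can confirm with:
kubectl get daemonset daemonset-example              # DESIRED equals the number of eligible nodes
kubectl get pods -l name=daemonset-example -o wide   # the NODE column shows one Pod per node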
StatefulSet
As a controller, StatefulSet gives each Pod a unique, stable identity and guarantees the ordering of deployment and scaling.
StatefulSet is designed for stateful services, whereas its counterparts Deployment and ReplicaSet are designed for stateless services. Its use cases include the following (a minimal manifest sketch follows the list):
- Stable persistent storage: after rescheduling, a Pod can still access the same persistent data, implemented with PVCs
- Stable network identity: a Pod's PodName and HostName stay the same after rescheduling, implemented with a Headless Service (a Service without a Cluster IP)
- Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; every earlier Pod must be Running and Ready before the next starts), implemented with init containers
- Ordered shrinking and ordered deletion (from N-1 down to 0)
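As a minimal sketch of the pieces described above (the names web and nginx and the image are illustrative): a headless Service supplies the stable network identity, and the StatefulSet references it via serviceName.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None            # headless Service: no cluster IP, stable per-Pod DNS instead
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx         # must reference the headless Service above
  replicas: 3                # Pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
        ports:
        - containerPort: 80
Stable storage would be added with volumeClaimTemplates, which gives each Pod its own PVC.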
Job/CronJob
A Job manages batch tasks, that is, tasks executed only once; it ensures that one or more Pods of the batch task terminate successfully.
CronJob manages time-based Jobs, that is:
- Running once at a given point in time
- Running periodically at given points in time
Typical scenarios include database backups and sending email.
Job
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
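To try it (the file name job.yaml is a placeholder; kubectl logs accepts a job/<name> reference):
kubectl apply -f job.yaml
kubectl get jobs       # wait until the Job shows as complete
kubectl logs job/pi    # prints the first 2000 digits of pi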
CronJob
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"        # required: cron-format schedule
  startingDeadlineSeconds: 5     # deadline in seconds for starting the Job if a scheduled run is missed
  concurrencyPolicy: Allow       # Allow (default) permits concurrent Jobs; Forbid skips a run while one is still active; Replace cancels the active Job and starts a new one
  jobTemplate:                   # required: the Job to create on each run
    spec:
      completions: 1             # number of Pods that must finish successfully for the Job to complete; default 1
      parallelism: 1             # number of Pods to run in parallel; default 1
      activeDeadlineSeconds: 5   # maximum time the Job may stay active; failed Pods are not retried after this
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
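After the CronJob is created you can watch it spawn a Job every minute (names match the manifest above):
kubectl get cronjob hello      # shows the schedule and the time of the last run
kubectl get jobs --watch       # a new Job appears roughly every minute
kubectl delete cronjob hello   # removes the CronJob and stops further runs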
Horizontal Pod Autoscaling
When an application's resource usage has peaks and valleys, how do you adjust the number of Pods backing a service to improve the overall resource utilization of the cluster? This is the job of Horizontal Pod Autoscaling (HPA), the automatic horizontal scaling of Pods.
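As a sketch, the autoscaling rule created earlier with kubectl autoscale can also be written declaratively. This assumes the nginx-deployment from above and a cluster that serves the autoscaling/v2 API:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:              # the workload whose replica count the HPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 10
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU usage exceeds 80%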