Six POD configuration manifest
6.1 pods.metadata POD metadata
6.1.1 labels Labels
- labels defines the object's labels; each label is a key-value pair
labels:
  app: myapp
  tier: frontend
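- Labels are what selectors match on; a minimal sketch of querying PODs by label (assuming a POD carrying these labels has already been created):
kubectl get pods -l app=myapp --show-labels
kubectl get pods -l app=myapp,tier=frontend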
6.2 pods.spec Specification
6.2.1 nodeName Running node
- When defining a POD with a resource manifest, nodeName directly binds the POD to the node it will run on
apiVersion: v1
kind: Pod
metadata:
  name: pod-deme
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  nodeName: node2                   # directly specify the node on which the POD will run
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
6.2.2 nodeSelector Node Selection
- When defining a POD with a resource manifest, the nodeSelector field constrains which nodes the POD may be scheduled to; the target node must already carry the matching label (see the labeling command after the manifest below)
apiVersion: v1
kind: Pod
metadata:
  name: pod-deme
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  nodeSelector:                     # define the node selector for this POD in the spec
    disktype: ssd                   # the POD will only run on nodes carrying the label disktype=ssd
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports: []
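- For this POD to be scheduled, at least one node must already carry the disktype=ssd label; a minimal sketch, assuming node3 is the node to be labeled (the node name is only an example):
kubectl label nodes node3 disktype=ssd
kubectl get nodes --show-labels | grep disktype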
- Create the POD from the file and check which node it is running on; it lands on the labeled node
kubectl create -f pod-demo.yaml
kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod-demo   1/1     Running   0          21s   10.244.2.29   node3   <none>           <none>
6.2.3 restartPolicy POD restart policy
Always: whenever the container exits, always restart it. k8s doubles the restart delay each time, starting at 30 seconds, up to a maximum of 300 seconds.
OnFailure: restart it only when the container exits with an error (non-zero status)
Never: never restart
Once a POD is scheduled to a node, it is not rescheduled as long as that node exists; it is only restarted in place. A POD is rescheduled only if it is deleted or the node fails; otherwise a POD that keeps failing to start is simply restarted over and over on the same node.
When a POD needs to be terminated, k8s sends SIGTERM (kill -15) so the container can shut down gracefully, waits for a 30-second grace period, and if the container has not terminated by then, sends SIGKILL.
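- A minimal sketch of setting restartPolicy in a manifest (the POD and image names are only examples; the container deliberately exits with an error so OnFailure triggers a restart):
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo
spec:
  restartPolicy: OnFailure          # restart the container only when it exits with a non-zero code
  containers:
  - name: main
    image: busybox:latest
    command: ["/bin/sh", "-c", "sleep 10; exit 1"]   # exits with an error to trigger a restart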
6.2.4 hostNetwork Host network namespace
A Boolean value that specifies whether the POD uses the host's network namespace
6.2.5 hostPID Host PID namespace
A Boolean value that specifies whether the POD uses the host's PID namespace
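- Both flags sit directly under spec; a minimal sketch (the POD name is only an example):
apiVersion: v1
kind: Pod
metadata:
  name: hostns-demo
spec:
  hostNetwork: true                 # container ports are bound directly in the host's network namespace
  hostPID: true                     # processes on the host are visible inside the container
  containers:
  - name: main
    image: busybox:latest
    command: ["/bin/sh", "-c", "sleep 3600"]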
6.2.6 containers configuration
kubectl explain pods.spec.containers
Containers <[]Object> means its value is an array; each array element is an object describing one container, which can take the following parameters:
- Available parameters
Parameter | Role |
---|---|
args | arguments passed to the entrypoint; the image's CMD is used if not set |
command | entrypoint array; the image's ENTRYPOINT is used if not set |
env | pass environment variables to the container |
envFrom | populate environment variables from a ConfigMap or Secret |
image | container image name |
imagePullPolicy | policy for pulling the image (Always / Never / IfNotPresent) |
lifecycle | postStart / preStop lifecycle hooks |
livenessProbe | probe that decides whether the container is alive |
name | name of the container; required and unique within the POD |
ports | ports the container exposes (informational) |
readinessProbe | probe that decides whether the container is ready to serve requests |
resources | compute resource requests and limits |
securityContext | security options the container runs with |
stdin | allocate a buffer for standard input |
stdinOnce | close stdin after the first attach |
terminationMessagePath | path of the file the termination message is written to |
terminationMessagePolicy | how the termination message is populated |
tty | allocate a TTY (requires stdin to be true) |
volumeDevices | block devices to be used by the container |
volumeMounts | volumes to mount into the container's filesystem |
workingDir | the container's working directory |
- Sample configuration
apiVersion: v1
kind: Pod
metadata:
  name: pod-deme                    # the name of the POD
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp                     # the name of the container to run
    image: ikubernetes/myapp:v1     # the container's image
    imagePullPolicy: IfNotPresent   # policy for pulling the image from the registry
    ports: []                       # ports the container exposes (see 6.2.6.2)
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 10"
6.2.6.1 imagePullPolicy Download policy
- imagePullPolicy is the policy for pulling the image. For details, see:
kubectl explain pods.spec.containers
Always        # always pull from the registry
Never         # never pull; use the local image if present, otherwise fail
IfNotPresent  # use the local image if present, pull it if not
If the image tag is latest, the default policy is Always: the image is always pulled from the registry
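- A minimal sketch of overriding that default for a :latest image (the POD and image names are only examples):
apiVersion: v1
kind: Pod
metadata:
  name: pullpolicy-demo
spec:
  containers:
  - name: web
    image: nginx:latest
    imagePullPolicy: IfNotPresent   # even for :latest, use a local copy when one exists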
6.2.6.2 Ports Port Information
- ports defines the ports the container exposes. For details, see:
kubectl explain pods.spec.containers.ports
The ports listed here give the system information about the container's network endpoints, but they are primarily informational. Not listing a port here does not prevent the container from exposing it: any port the container listens on at address 0.0.0.0 is reachable from the network.
ports:                   # define two port objects, one HTTP and one HTTPS
- name: http             # give the port a name so other objects can reference it
  containerPort: 80      # the port number
- name: https            # a name that is easy to reference
  containerPort: 443     # the port number here is informational, making it easy to view and to reference by name
6.2.6.3 env Passes environment variables
- Use Pod fields
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
  valueFrom:
    fieldRef:
      fieldPath: spec.serviceAccountName
- Use Container fields
env:
- name: MY_CPU_REQUEST
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
- name: MY_CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.cpu
- name: MY_MEM_REQUEST
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: requests.memory
- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: test-container
      resource: limits.memory
Ways to obtain POD information inside the container:
You can use environment variables (as above)
You can use the downward API volume:
https://kubernetes.io/zh/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
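- Once the POD is running, the injected variables can be checked from inside the container; a minimal sketch (the variable names follow the examples above; replace <pod-name> with the actual POD name):
kubectl exec <pod-name> -- printenv MY_POD_NAME MY_POD_IP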
6.2.6.4 command ENTRYPOINT
- Command defines the program that the container runs. See:
kubernetes.io/docs/tasks/…
command is an entrypoint array and is not run inside a shell; if you want shell processing, you must invoke the shell explicitly. If command is not provided, the ENTRYPOINT of the Docker image is run.
Image Entrypoint | Image Cmd | Container command | Container args | Command run |
---|---|---|---|---|
[/ep-1] | [foo bar] | | | [ep-1 foo bar] |
[/ep-1] | [foo bar] | [/ep-2] | | [ep-2] |
[/ep-1] | [foo bar] | | [zoo boo] | [ep-1 zoo boo] |
[/ep-1] | [foo bar] | [/ep-2] | [zoo boo] | [ep-2 zoo boo] |
6.2.6.5 args CMD
- args passes arguments to command
If args is not defined and the image has both ENTRYPOINT and CMD instructions, the image's own CMD is passed to ENTRYPOINT as arguments. If args is specified manually, the image's CMD is no longer passed as arguments.
If you reference a variable in args, use the $(VAR_NAME) form; if you do not want the substitution to happen, escape it as $$(VAR_NAME) so that the literal $(VAR_NAME) is available inside the container.
env:
- name: MESSAGE
  value: "hello kaliarch"
command: ["/bin/echo"]
args: ["$(MESSAGE)"]
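- A minimal sketch of the escaped form: $$(MESSAGE) is not substituted by k8s, so the container receives the literal string $(MESSAGE) (the variable name follows the example above):
env:
- name: MESSAGE
  value: "hello kaliarch"
command: ["/bin/echo"]
args: ["$$(MESSAGE)"]   # echo prints the literal $(MESSAGE) instead of the variable's value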
6.2.6.6 annotations Annotations
Annotations differ from labels in that they cannot be used to select resource objects; they only attach metadata to an object, and their length is not limited
apiVersion: v1
kind: Pod
metadata:
  name: pod-deme
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:                      # annotations keyword
    kaliarch/created-by: "xuel"     # add a key-value resource annotation
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
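- Annotations are not shown by a plain kubectl get; a minimal sketch of reading them back (the POD name follows the example above):
kubectl describe pod pod-deme | grep -A2 Annotations
kubectl get pod pod-deme -o jsonpath='{.metadata.annotations}'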
6.2.6.7 POD Life Cycle
- The common states
Pending: the POD has been created but no suitable node has been found, or scheduling has not yet completed
Running: the POD has been bound to a node and its containers are running
Failed: all containers have terminated and at least one exited with a failure
Succeeded: all containers have terminated successfully; this state is short-lived
Unknown: the state is unknown, which happens when the apiserver cannot communicate with the kubelet of the node
- POD creation phase
The user's creation request is submitted to the apiserver, which stores the requested target state in etcd. The apiserver then asks the scheduler to schedule the POD and updates etcd with the scheduling result. The kubelet on the target node learns from the etcd state change (via the apiserver) that there is a new task for it, fetches the target state declared in the resource manifest, and runs the POD on the node accordingly. Whether creation succeeds or fails, the result is reported back to the apiserver, which again records it in etcd.
6.2.6.8 livenessProbe Liveness detection
See: kubectl explain pods.spec.containers.livenessProbe
- livenessProbe and readinessProbe are two lifecycle checks in k8s; both define probes that detect container state, but they react differently
livenessProbe   # indicates whether the container is running; if the liveness probe fails, the container is restarted according to restartPolicy
readinessProbe  # indicates whether the container is ready to serve requests; if the readiness probe fails, the endpoint controller removes the POD's IP from the endpoints of all Services that match the POD
- livenessProbe/readinessProbe available probe types and probe attributes; only one probe type can be defined per probe, for example httpGet
exec       # execute the specified command in the container; the diagnosis succeeds if the return code is 0
tcpSocket  # TCP check against the container's IP on the specified port; the diagnosis succeeds if the port is open
httpGet    # HTTP GET request against the specified port and path on the container; the diagnosis succeeds if the response code is >= 200 and < 400
failureThreshold     # how many consecutive failures are needed before the probe is considered failed
periodSeconds        # the interval between probes
timeoutSeconds       # how long to wait for a result after each probe is issued; defaults to 1 second
initialDelaySeconds  # how long to delay the first probe after the container starts; by default probing starts as soon as the container starts
- Using the exec probe; the expected result is that after about 39 seconds the POD shows Error, but it is not restarted automatically (restartPolicy is Never)
apiVersion: v1
kind: Pod
metadata:
  name: execlive
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - "/bin/sh"
    - "-c"
    - "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 3600"   # create a file and wait 30 seconds; during that time the probe succeeds, after 30 seconds it fails
    livenessProbe:                                # if the liveness check fails, act on the container according to restartPolicy
      exec:                                       # exec-type probe: run a command inside the container
        command: ["test", "-e", "/tmp/healthy"]   # test that the file exists
      initialDelaySeconds: 2                      # how long to delay probing after the container starts
      periodSeconds: 3                            # probe every 3 seconds
      failureThreshold: 3                         # 3 consecutive failures mark the probe as failed
  restartPolicy: Never                            # do not restart the POD when the container fails
- Using the httpGet probe; the expected result is that after about 40 seconds the liveness check fails and the container is restarted automatically; the first restart happens immediately, then the delay doubles from 30 seconds up to 300 seconds
apiVersion: v1
kind: Pod
metadata:
  name: httpgetlive
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: nginx
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    livenessProbe:                      # if the liveness check fails, act on the container according to restartPolicy
      httpGet:                          # httpGet probe
        path: /error.html               # page to probe; it deliberately does not exist, so the probe fails
        port: http                      # port to probe, referencing the container port by name
        httpHeaders:                    # request headers set for the httpGet probe
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15           # how long to delay probing after the container starts
      timeoutSeconds: 1                 # how long to wait for a result after each probe
  restartPolicy: Always                 # restart the POD's container when the check fails
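- Besides exec and httpGet, a tcpSocket probe only checks whether a port accepts connections; a minimal sketch (the POD name is only an example, the image follows the examples above):
apiVersion: v1
kind: Pod
metadata:
  name: tcplive
spec:
  containers:
  - name: nginx
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: http                  # succeed as long as something is listening on this port
      initialDelaySeconds: 5
      periodSeconds: 10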
6.2.6.9 readinessProbe Readiness check
For example, if a container runs Tomcat, deploying the application may take a while (expanding the WAR package and so on). By default, k8s marks the container as ready as soon as it has started and begins routing service requests to it, but a started container does not mean Tomcat is already serving successfully. readinessProbe therefore performs a readiness check to decide whether the service can actually be accessed.
- livenessProbe/readinessProbe use the same probe types and probe attributes; only one probe type can be defined per probe, for example httpGet
livenessProbe   # indicates whether the container is running; if the liveness probe fails, the container is restarted according to restartPolicy
readinessProbe  # indicates whether the container is ready to serve requests; if the readiness probe fails, the endpoint controller removes the POD's IP from the endpoints of all Services that match the POD
- Using the httpGet readiness probe; the expected result is that when the probed page disappears the POD is marked NotReady instead of being restarted, and it becomes Ready again once the page is restored
apiVersion: v1
kind: Pod
metadata:
  name: httpgetread
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: nginx
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    readinessProbe:                     # if the readiness check fails, the POD is marked NotReady rather than restarted
      httpGet:                          # httpGet probe
        path: /index.html               # probe the default page; deleting it below makes the probe fail
        port: http                      # port to probe, referencing the container port by name
        httpHeaders:                    # request headers set for the httpGet probe
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15           # how long to delay probing after the container starts
      timeoutSeconds: 1                 # how long to wait for a result after each probe
  restartPolicy: Always
- Manually enter the container and remove index.html to make the readiness probe fail
kubectl exec -it httpgetread -- /bin/sh
$ rm -f /usr/share/nginx/html/index.html
- As a result, the POD's READY state becomes 0/1 (not ready), and Services will no longer route requests to this POD
[root@node1 ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
httpgetread 0/1 Running 0 2m50s
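- The cause of the NotReady state can be confirmed from the POD's events; a minimal sketch:
kubectl describe pod httpgetread   # the Events section lists the readiness probe failures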
- Recreate the file inside the container to make the readiness probe succeed again
kubectl exec -it httpgetread -- /bin/sh
$ echo "hello worlld" >>/usr/share/nginx/html/index.html
- As a result, the POD returns to READY, and Services will route requests to this POD again
[root@node1 ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
httpgetread 1/1 Running 0 8m15s
6.2.6.10 lifecycle Lifecycle Hooks
See: kubectl explain pods.spec.containers.lifecycle
postStart  # command executed immediately after the container is created; if it fails, the container is terminated and restarted according to restartPolicy
preStop    # command executed immediately before the container is terminated
- Basic use of postStart/preStop
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/usr/sbin/nginx", "-s", "quit"]
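- Whether the postStart hook ran can be verified by reading the file it wrote; a minimal sketch:
kubectl exec lifecycle-demo -- cat /usr/share/message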
POD controller
A POD managed by a controller has its number of replicas maintained automatically, and the controller can scale the PODs in and out; basic controllers, however, do not provide advanced functions such as rolling update.
Name | Role |
---|---|
ReplicationController | The only controller in early k8s; it is now essentially deprecated. |
ReplicaSet | Creates the specified number of POD replicas on behalf of the user and supports scaling; known as the new generation ReplicationController. It works from three things: 1. the number of POD replicas the user wants; 2. a label selector that decides which PODs it manages; 3. a POD template used to create new PODs when there are not enough replicas. We generally do not use ReplicaSet directly. |
Deployment | Implements its functions by controlling ReplicaSet. Besides ReplicaSet's scaling, Deployment also supports rolling update and rollback, and provides declarative configuration. It is the most commonly used controller and is used to manage stateless applications. |
DaemonSet | Ensures that each node in the cluster runs exactly one copy of the POD and that newly added nodes automatically run it too, so there is no need to define the number of PODs, only the label selector and the POD template. It can therefore run one POD copy on each node selected by the label selector. |
Job | Runs a one-off task, such as a database backup, and exits when it completes normally; the POD is not started again unless the task terminates abnormally. |
CronJob | Runs periodic tasks. |
StatefulSet | Manages stateful PODs, but each different stateful application needs its own scripts to manage the stateful service. To make up for StatefulSet being inconvenient for managing stateful applications, k8s also provides Helm, which installs stateful applications from a marketplace in a yum-like way. |
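As a reference for the most commonly used controller, here is a minimal Deployment sketch covering the three things mentioned above (desired replica count, label selector, POD template); the names reuse the myapp example from earlier:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
spec:
  replicas: 2                       # the number of POD replicas the user wants
  selector:
    matchLabels:
      app: myapp                    # label selector that decides which PODs are managed
  template:                         # POD template used to create replicas
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1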
Other
These notes are published at: github.com/redhatxl/aw… Likes, favorites, and shares are all welcome.