Container resource limitation
In K8s, the node that runs a Pod must satisfy the Pod's basic operating conditions; that is, its available CPU and memory must cover the Pod's minimum resource requests.
A container has no kernel of its own. By default a container is unconstrained and can use as much of a given resource as the host's kernel scheduler allows. Without limits, containers interfere with one another: a container with heavy resource demands can swallow all of the hardware resources, leaving other containers with nothing and bringing their services down. Docker therefore provides ways to limit how much memory, CPU, and disk I/O a container may use, and these limits can be set when the container is created with `docker create` or `docker run`.
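The same `docker run` flags can also be written declaratively. A minimal sketch using a Docker Compose file (the service and image names are placeholders, not from the original article):

```yaml
# compose.yaml: cap the service at 1 CPU and 512 MB of memory
services:
  web:
    image: nginx:alpine
    cpus: 1.0        # roughly equivalent to `docker run --cpus=1.0`
    mem_limit: 512m  # roughly equivalent to `docker run --memory=512m`
```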
- requests: the guaranteed minimum, used by the scheduler as a soft reservation
- limits: the hard ceiling a container may never exceed
- CPU units
1 CPU = 1000 millicores, so 0.5 CPU = 500m
- Memory units
Binary suffixes: Ki, Mi, Gi, Ti, Pi, Ei
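To make the units concrete, here is a sketch (not from the original article) of equivalent ways to write the same quantities in a container's resources block:

```yaml
# The following values illustrate unit equivalences:
resources:
  requests:
    cpu: 500m        # 500 millicores = 0.5 CPU; could also be written "0.5"
    memory: 512Mi    # binary suffix: 512 * 2^20 bytes
  limits:
    cpu: "1"         # 1 full CPU = 1000m
    memory: 1Gi      # 1 * 2^30 bytes (1G, decimal, would be 10^9 bytes)
```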
19.1 Resource Restrictions
- Manifest format, see: `kubectl explain pods.spec.containers.resources`
```
resources  <Object>             # resource constraints
  limits   <map[string]string>  # hard ceiling on resource usage
    cpu    <string>             # in millicores (m)
    memory <string>             # in Gi, Mi
  requests <map[string]string>  # guaranteed minimum
    cpu    <string>             # in millicores (m)
    memory <string>             # in Gi, Mi
```
- Example manifest. The node has 12 CPU cores, so a CPU limit of 1000m (one core) is easily satisfiable
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: nginx
    image: ikubernetes/stress-ng
    command:
    - "/usr/bin/stress-ng"
    #- "-m 1"            # one memory-stress worker (disabled here)
    - "-c 1"             # load the CPU with a single worker
    - "--metrics-brief"
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
    resources:
      requests:
        cpu: 1000m       # used in the scheduler's pre-selection (predicate) phase to eliminate nodes that cannot satisfy it
        memory: 512Mi
      limits:
        cpu: 1000m       # the container is capped at one CPU, no matter how many processes it runs
        memory: 512Mi
```
- View the results
```
Mem: 855392K used, 139916K free, 10188K shrd, 796K buff, 350368K cached
CPU0:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU1: 100% usr   0% sys   0% nic   0% idle   0% io   0% irq   0% sirq   <- one CPU fully loaded
CPU2:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU3:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU4:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU5:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU6:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU7:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU8:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU9:   0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU10:  0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
CPU11:  0% usr   0% sys   0% nic  99% idle   0% io   0% irq   0% sirq
Load average: 0.84 0.50 0.40 3/485 11
PID  PPID USER STAT  VSZ %VSZ CPU %CPU COMMAND
  6     1 root R    6888   1%   1   8% {stress-ng-cpu} /usr/bin/stress-ng -c 1 --metrics-brief
  1     0 root S    6244   1%  10   0% /usr/bin/stress-ng -c 1 --metrics-brief
  7     0 root R    1504   0%  11   0% top
```
19.2 QoS (Quality of Service) Classes
- Guaranteed
Highest priority: every container sets cpu.limits = cpu.requests and memory.limits = memory.requests
- Burstable
Medium priority: at least one container sets a requests attribute for CPU or memory
- BestEffort
Lowest priority: no container sets requests or limits. When node resources run low, these Pods are terminated first to free resources for Burstable and Guaranteed Pods
- OOM policy
When memory must be reclaimed, the process whose usage is highest relative to its request is killed first
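The QoS class is derived from the requests/limits settings rather than declared explicitly; after creation it can be read from the Pod's `status.qosClass` field. A sketch (values are illustrative, not from the original article) of per-container resources blocks that yield each class:

```yaml
# Guaranteed: every container sets limits equal to requests for CPU and memory
resources:
  requests: {cpu: 500m, memory: 256Mi}
  limits:   {cpu: 500m, memory: 256Mi}
---
# Burstable: at least one container sets requests (limits higher or absent)
resources:
  requests: {cpu: 250m, memory: 128Mi}
  limits:   {cpu: 500m, memory: 256Mi}
---
# BestEffort: no requests and no limits on any container in the Pod
resources: {}
```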
Other
My notes are collected at github.com/redhatxl/aw… Likes, comments, and shares are welcome.