1 VPA Overview
VPA stands for Vertical Pod Autoscaler. It automatically sets CPU and memory requests based on actual container resource usage, so that each Pod can be scheduled onto a node that provides the appropriate resources. It can shrink the requests of containers that ask for more than they use, and raise the requests of under-provisioned containers, at any time, based on usage. Note: VPA does not change a Pod's resource limits.
Without further ado, the diagram below shows the VPA workflow.
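As a quick preview (a minimal sketch only; section 4 below walks through full, working examples with the same fields), a VPA object simply points at a target workload and declares an update policy:

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa          # hypothetical name, for illustration only
spec:
  targetRef:                 # the workload whose Pods VPA should watch
    apiVersion: "apps/v1"
    kind: Deployment
    name: example-app        # hypothetical Deployment name
  updatePolicy:
    updateMode: "Off"        # "Off" = recommend only; "Auto" = evict Pods and re-create them with new requests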
2 Deploying metrics-server
2.1 Downloading the Deployment Manifest file
[root@VM-10-48-centos ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
2.2 Modifying the components.yaml file
- Change the image address to: scofield/metrics-server:v0.3.7
- Modify the metrics-server startup arguments (args)
- name: metrics-server
  image: scofield/metrics-server:v0.3.7
  imagePullPolicy: IfNotPresent
  command:
  - /metrics-server
  args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
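If you prefer to script the image swap instead of editing the file by hand, a sed one-liner along these lines can do it (a sketch; the exact upstream image path in your components.yaml may differ, so check the file before substituting):

# replace the upstream metrics-server image with the mirrored one (verify the left-hand path matches your manifest)
sed -i 's#k8s.gcr.io/metrics-server/metrics-server:v0.3.7#scofield/metrics-server:v0.3.7#' components.yaml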
2.3 Performing deployment
[root@VM-10-48-centos ~]# kubectl apply -f components.yaml
2.4 Verification
[root@VM-10-48-centos ~]# kubectl get po -n kube-system | grep metrics-server
metrics-server-5b58f4df77-f7nks 1/1 Running 0 35d
# If you can get top information, consider it a success
[root@VM-10-48-centos ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
10.1.2.15 138m 3% 4207Mi 29%
10.1.2.16 159m 4% 3138Mi 45%
10.1.2.17 147m 3% 4118Mi 59%
10.1.50.2 82m 4% 1839Mi 55%
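If kubectl top returns errors instead of numbers, it can help to check that the metrics API itself is registered and serving data (a quick sanity check, not part of the original steps):

# the v1beta1.metrics.k8s.io APIService should report AVAILABLE as True
kubectl get apiservices | grep metrics
# query the metrics API directly; a JSON NodeMetricsList means metrics-server is serving data
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"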
3 Deploying vertical-pod-autoscaler
3.1 Cloning the AutoScaler project
[root@VM-10-48-centos ~]# git clone https://github.com/kubernetes/autoscaler.git
3.2 Modifying the deployment files
[root@VM-10-48-centos ~]# cd autoscaler/vertical-pod-autoscaler/deploy

Change the image addresses in the following files:
- admission-controller-deployment.yaml: change us.gcr.io/k8s-artifacts-prod/autoscaling/vpa-admission-controller:0.8.0 to scofield/vpa-admission-controller:0.8.0
- recommender-deployment.yaml: change us.gcr.io/k8s-artifacts-prod/autoscaling/vpa-recommender:0.8.0 to scofield/vpa-recommender:0.8.0
- updater-deployment.yaml: change us.gcr.io/k8s-artifacts-prod/autoscaling/vpa-updater:0.8.0 to scofield/vpa-updater:0.8.0
3.3 Deployment
[root@VM-10-48-centos ~]# cd autoscaler/vertical-pod-autoscaler
[root@VM-10-48-centos ~]# ./hack/vpa-up.sh
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io created
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io created
clusterrole.rbac.authorization.k8s.io/system:metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:vpa-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:evictioner created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-actor created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-target-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-target-reader-binding created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-evictionter-binding created
serviceaccount/vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-status-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-status-reader-binding created
serviceaccount/vpa-updater created
deployment.apps/vpa-updater created
serviceaccount/vpa-recommender created
deployment.apps/vpa-recommender created
Generating certs for the VPA Admission Controller in /tmp/vpa-certs.
Generating RSA private key, 2048 bit long modulus (2 primes)
............................................................................+++++
.+++++
e is 65537 (0x010001)
Generating RSA private key, 2048 bit long modulus (2 primes)
............+++++
...+++++
e is 65537 (0x010001)
Signature ok
subject=CN = vpa-webhook.kube-system.svc
Getting CA Private Key
Uploading certs to the cluster.
secret/vpa-tls-certs created
Deleting /tmp/vpa-certs.
deployment.apps/vpa-admission-controller created
service/vpa-webhook created
If vpa-up.sh fails with:
ERROR: Failed to create CA certificate for self-signing. If the error is "unknown option -addext", update your openssl version or deploy VPA from the vpa-release-0.8 branch.
then the OpenSSL version on the host is too old and needs to be upgraded:
[root@VM-10-48-centos ~]# yum install gcc gcc-c++ -y
[root@VM-10-48-centos ~]# openssl version -a
[root@VM-10-48-centos ~]# wget https://www.openssl.org/source/openssl-1.1.1k.tar.gz && tar zxf openssl-1.1.1k.tar.gz && cd openssl-1.1.1k
[root@VM-10-48-centos ~]# ./config
[root@VM-10-48-centos ~]# make && make install
[root@VM-10-48-centos ~]# mv /usr/local/bin/openssl /usr/local/bin/openssl.bak
[root@VM-10-48-centos ~]# mv apps/openssl /usr/local/bin
[root@VM-10-48-centos ~]# openssl version -a
Built on: Mon Mar 29 23:48:12 2021 UTC
Platform: Linux-x86_64
options: bn(64,64) rc4(16x,int) des(int) idea(int) blowfish(PTR)
compiler: gcc -fPIC -pthread -m64 -Wa,--noexecstack -Wall -O3 -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -Wa,--noexecstack -Wa,--generate-missing-build-notes=yes -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_PIC -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DKECCAK1600_ASM -DRC4_ASM -DMD5_ASM -DAESNI_ASM -DVPAES_ASM -DGHASH_ASM -DECP_NISTZ256_ASM -DX25519_ASM -DPOLY1305_ASM -DZLIB -DNDEBUG -DPURIFY -DDEVRANDOM="\"/dev/urandom\""
OPENSSLDIR: "/etc/pki/tls"
ENGINESDIR: "/usr/lib64/engines-1.1"
Seeding source: os-specific
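Before rerunning the certificate script, it is worth confirming that the upgraded binary is the one found first in PATH and that it understands the option that caused the failure (OpenSSL 1.1.1 lists -addext in the req help output):

# should print the -addext option description if the new openssl is being used
openssl req -help 2>&1 | grep addext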
After the upgrade, run vertical-pod-autoscaler/pkg/admission-controller/gencerts.sh again.
3.4 Viewing Results
You can see that both metrics-server and the VPA components are running properly:
[root@VM-10-48-centos ~]# kubectl get po -n kube-system | grep -E "metrics-server|vpa"
metrics-server-5b58f4df77-f7nks 1/1 Running 0 35d
vpa-admission-controller-7ff888c959-tvtmk 1/1 Running 0 104m
vpa-recommender-74f69c56cb-zmzwg 1/1 Running 0 104m
vpa-updater-79b88f9c55-m4xx5 1/1 Running 0 103m
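Besides the Pods, it can be worth confirming that vpa-up.sh registered the CRDs and the admission webhook (a quick check; exact object names may vary slightly between VPA versions):

# the two CRDs created by vpa-up.sh
kubectl get crd | grep verticalpodautoscaler
# the admission controller registers a mutating webhook that injects the recommended requests into new Pods
kubectl get mutatingwebhookconfigurations | grep vpa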
4 Examples
4.1 updateMode: Off
1 First, deploy an Nginx service in the vpa namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: vpa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: 100m
            memory: 250Mi
Check the result: two Pods are running.
[root@VM-10-48-centos ~]# kubectl get po -n vpa
NAME READY STATUS RESTARTS AGE
nginx-59fdffd754-cb5dn 1/1 Running 0 8s
nginx-59fdffd754-cw8d7 1/1 Running 0 9s
2 Create a service of type NodePort
[root@VM-10-48-centos ~]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: vpa
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
[root@VM-10-48-centos ~]# kubectl get svc -n vpa | grep nginx
nginx   NodePort   10.255.253.166   <none>   80:30895/TCP   54s

[root@VM-2-16-centos ~]# curl -I 10.1.2.16:30895
HTTP/1.1 200 OK
Server: nginx/1.21.1
Date: Fri, 09 Jul 2021 09:54:58 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
Connection: keep-alive
ETag: "60e46fc5-264"
Accept-Ranges: bytes
3 Create a VPA object with updateMode: "Off". This mode only produces resource recommendations and does not update the Pods.
[root@VM-10-48-centos ~]# cat nginx-vpa-demo.yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
  namespace: vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: nginx
  updatePolicy:
    updateMode: "Off"
  resourcePolicy:
    containerPolicies:
    - containerName: "nginx"
      minAllowed:
        cpu: "250m"
        memory: "100Mi"
      maxAllowed:
        cpu: "2000m"
        memory: "2048Mi"
4 View the deployment result
[root@VM-10-48-centos ~]# kubectl get vpa -n vpa
NAME MODE CPU MEM PROVIDED AGE
nginx-vpa Off 7s
5 Use Describe to check VPA details, focusing on Container Recommendations
[root@VM-10-48-centos ~]# kubectl describe vpa nginx-vpa -n vpa
Name: nginx-vpa
Namespace: vpa
Spec:
Resource Policy:
Container Policies:
Container Name: nginx
Max Allowed:
Cpu: 2000m
Memory: 2048Mi
Min Allowed:
Cpu: 250m
Memory: 100Mi
Target Ref:
API Version: apps/v1
Kind: Deployment
Name: nginx
Update Policy:
Update Mode: Off
Status:
Conditions:
Last Transition Time: 2021-07-09T09:59:50Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: nginx
Lower Bound:
Cpu: 250m
Memory: 262144k
Target:
Cpu: 250m
Memory: 262144k
Uncapped Target:
Cpu: 25m
Memory: 262144k
Upper Bound:
Cpu: 670m
Memory: 700542995
Here:
- Lower Bound: the lower bound of the recommendation
- Target: the recommended value
- Upper Bound: the upper bound of the recommendation
- Uncapped Target: the recommendation that would be given if no minAllowed/maxAllowed boundaries were set

The output shows a recommended CPU request of 25m (uncapped) and a recommended memory request of 262144k.
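If you only need the Target values (for scripting or a quick check), they can also be read straight from the VPA status with jsonpath (a sketch using the nginx-vpa object created above):

# prints the recommended cpu/memory requests for the nginx container
kubectl get vpa nginx-vpa -n vpa \
  -o jsonpath='{.status.recommendation.containerRecommendations[*].target}'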
6 Now run a load test against nginx:
[root@VM-10-48-centos ~]# ab -c 100 -n 10000 http://10.1.2.16:30895/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.2.16 (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
7 Observe the change of VPA Recommendation after a few minutes
[root@VM-10-48-centos ~]# kubectl describe vpa -n vpa nginx-vpa | tail -n 20
Conditions:
Last Transition Time: 2021-07-09T09:59:50Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: nginx
Lower Bound:
Cpu: 250m
Memory: 262144k
Target:
Cpu: 1643m
Memory: 262144k
Uncapped Target:
Cpu: 1643m
Memory: 262144k
Upper Bound:
Cpu: 2
Memory: 562581530
Events: <none>
As you can see from the output, VPA now recommends cpu: 1643m for the Pod; but because updateMode is "Off", the Pods are not updated.
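In Off mode the recommendation has to be applied by hand if you want to act on it. One way (illustrative only, using the Target values reported above) is to patch the Deployment's requests, which triggers a normal rolling update:

# set the nginx container's requests to the recommended values
kubectl -n vpa set resources deployment nginx \
  --containers=nginx --requests=cpu=1643m,memory=262144k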
4.2 updateMode: Auto
1 Set updateMode: "Auto" and see what VPA does. First, change the Deployment's requests to memory: 50Mi, cpu: 100m and re-apply it:
[root@VM-10-48-centos ~]# kubectl get po -n vpa
NAME READY STATUS RESTARTS AGE
nginx-5594c66dc6-lzs67 1/1 Running 0 26s
nginx-5594c66dc6-zk6h9 1/1 Running 0 21s
2 Modify nginx-vpa-demo.yaml, setting updateMode: "Auto" (and renaming the VPA object nginx-vpa-2):
[root@k8s-node001 examples]# cat nginx-vpa-demo.yaml
apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa-2
  namespace: vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: nginx
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: "nginx"
      minAllowed:
        cpu: "250m"
        memory: "100Mi"
      maxAllowed:
        cpu: "2000m"
        memory: "2048Mi"
3 Run the load test again
[root@VM-10-48-centos ~]# ab -c 100 -n 10000000 http://10.1.2.16:30895/
4 A few minutes later, use Describe to view VPA details, also focusing on Container Recommendations
[root@VM-10-48-centos ~]# kubectl describe vpa nginx-vpa -n vpa | tail -n 20
Conditions:
Last Transition Time: 2021-07-09T09:59:50Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
Container Name: nginx
Lower Bound:
Cpu: 250m
Memory: 262144k
Target:
Cpu: 1643m
Memory: 262144k
Uncapped Target:
Cpu: 1643m
Memory: 262144k
Upper Bound:
Cpu: 2
Memory: 511550327
Events: <none>
The Target is now Cpu: 1643m, Memory: 262144k.
5 Check the events
[root@VM-10-48-centos ~]# kubectl get event -n vpa
LAST SEEN TYPE REASON OBJECT MESSAGE
38s Normal Scheduled pod/nginx-5594c66dc6-d8d6h Successfully assigned vpa/nginx-5594c66dc6-d8d6h to 10.1.2.16
38s Normal Pulling pod/nginx-5594c66dc6-d8d6h Pulling image "nginx"
37s Normal Pulled pod/nginx-5594c66dc6-d8d6h Successfully pulled image "nginx"
37s Normal Created pod/nginx-5594c66dc6-d8d6h Created container nginx
37s Normal Started pod/nginx-5594c66dc6-d8d6h Started container nginx
3m10s Normal Scheduled pod/nginx-5594c66dc6-lzs67 Successfully assigned vpa/nginx-5594c66dc6-lzs67 to 10.1.2.15
3m9s Normal Pulling pod/nginx-5594c66dc6-lzs67 Pulling image "nginx"
3m5s Normal Pulled pod/nginx-5594c66dc6-lzs67 Successfully pulled image "nginx"
3m5s Normal Created pod/nginx-5594c66dc6-lzs67 Created container nginx
3m5s Normal Started pod/nginx-5594c66dc6-lzs67 Started container nginx
99s Normal EvictedByVPA pod/nginx-5594c66dc6-lzs67 Pod was evicted by VPA Updater to apply resource recommendation.
99s Normal Killing pod/nginx-5594c66dc6-lzs67 Stopping container nginx
98s Normal Scheduled pod/nginx-5594c66dc6-tdmnh Successfully assigned vpa/nginx-5594c66dc6-tdmnh to 10.1.2.15
98s Normal Pulling pod/nginx-5594c66dc6-tdmnh Pulling image "nginx"
97s Normal Pulled pod/nginx-5594c66dc6-tdmnh Successfully pulled image "nginx"
97s Normal Created pod/nginx-5594c66dc6-tdmnh Created container nginx
97s Normal Started pod/nginx-5594c66dc6-tdmnh Started container nginx
3m5s Normal Scheduled pod/nginx-5594c66dc6-zk6h9 Successfully assigned vpa/nginx-5594c66dc6-zk6h9 to 10.1.2.17
3m4s Normal Pulling pod/nginx-5594c66dc6-zk6h9 Pulling image "nginx"
3m Normal Pulled pod/nginx-5594c66dc6-zk6h9 Successfully pulled image "nginx"
2m59s Normal Created pod/nginx-5594c66dc6-zk6h9 Created container nginx
2m59s Normal Started pod/nginx-5594c66dc6-zk6h9 Started container nginx
39s Normal EvictedByVPA pod/nginx-5594c66dc6-zk6h9 Pod was evicted by VPA Updater to apply resource recommendation.
39s Normal Killing pod/nginx-5594c66dc6-zk6h9 Stopping container nginx
3m10s Normal SuccessfulCreate replicaset/nginx-5594c66dc6 Created pod: nginx-5594c66dc6-lzs67
3m5s Normal SuccessfulCreate replicaset/nginx-5594c66dc6 Created pod: nginx-5594c66dc6-zk6h9
99s Normal SuccessfulCreate replicaset/nginx-5594c66dc6 Created pod: nginx-5594c66dc6-tdmnh
38s Normal SuccessfulCreate replicaset/nginx-5594c66dc6 Created pod: nginx-5594c66dc6-d8d6h
35m Normal Scheduled pod/nginx-59fdffd754-cb5dn Successfully assigned vpa/nginx-59fdffd754-cb5dn to 10.1.2.16
35m Normal Pulling pod/nginx-59fdffd754-cb5dn Pulling image "nginx"
35m Normal Pulled pod/nginx-59fdffd754-cb5dn Successfully pulled image "nginx"
35m Normal Created pod/nginx-59fdffd754-cb5dn Created container nginx
35m Normal Started pod/nginx-59fdffd754-cb5dn Started container nginx
3m5s Normal Killing pod/nginx-59fdffd754-cb5dn Stopping container nginx
35m Normal Scheduled pod/nginx-59fdffd754-cw8d7 Successfully assigned vpa/nginx-59fdffd754-cw8d7 to 10.1.2.16
35m Normal Pulling pod/nginx-59fdffd754-cw8d7 Pulling image "nginx"
35m Normal Pulled pod/nginx-59fdffd754-cw8d7 Successfully pulled image "nginx"
35m Normal Created pod/nginx-59fdffd754-cw8d7 Created container nginx
35m Normal Started pod/nginx-59fdffd754-cw8d7 Started container nginx
2m58s Normal Killing pod/nginx-59fdffd754-cw8d7 Stopping container nginx
35m Normal SuccessfulCreate replicaset/nginx-59fdffd754 Created pod: nginx-59fdffd754-cw8d7
35m Normal SuccessfulCreate replicaset/nginx-59fdffd754 Created pod: nginx-59fdffd754-cb5dn
3m5s Normal SuccessfulDelete replicaset/nginx-59fdffd754 Deleted pod: nginx-59fdffd754-cb5dn
2m58s Normal SuccessfulDelete replicaset/nginx-59fdffd754 Deleted pod: nginx-59fdffd754-cw8d7
35m Normal ScalingReplicaSet deployment/nginx Scaled up replica set nginx-59fdffd754 to 2
34m Normal EnsuringService service/nginx Deleted Loadbalancer
34m Normal EnsureServiceSuccess service/nginx Service Sync Success. RetrunCode: S2000
3m10s Normal ScalingReplicaSet deployment/nginx Scaled up replica set nginx-5594c66dc6 to 1
3m5s Normal ScalingReplicaSet deployment/nginx Scaled down replica set nginx-59fdffd754 to 1
3m5s Normal ScalingReplicaSet deployment/nginx Scaled up replica set nginx-5594c66dc6 to 2
2m58s Normal ScalingReplicaSet deployment/nginx Scaled down replica set nginx-59fdffd754 to 0
You can see that the VPA Updater evicted the running nginx Pods (EvictedByVPA) and new nginx Pods were started with the resources recommended by VPA:
[root@VM-10-48-centos ~]# kubectl describe po -n vpa nginx-5594c66dc6-d8d6h
Name: nginx-5594c66dc6-d8d6h
Namespace: vpa
Priority: 0
Node: 10.1.2.16/10.1.2.16
Start Time: Fri, 09 Jul 2021 18:09:26 +0800
Labels: app=nginx
pod-template-hash=5594c66dc6
Annotations: tke.cloud.tencent.com/networks-status:
[{
"name": "tke-bridge"."interface": "eth0"."ips": [
"10.252.1.50"]."mac": "e6:38:26:0b:c5:97"."default": true."dns": {}
}]
vpaObservedContainers: nginx
vpaUpdates: Pod resources updated by nginx-vpa: container 0: cpu request, memory request
Status: Running
IP: 10.252.1.50
IPs:
IP: 10.252.1.50
Controlled By: ReplicaSet/nginx-5594c66dc6
Containers:
nginx:
Container ID: docker://42e45f5f122ba658e293395d78a073cfe51534c773f9419a179830fd6d1698ea
Image: nginx
Image ID: docker-pullable://nginx@sha256:8df46d7414eda82c2a8c9c50926545293811ae59f977825845dda7d558b4125b
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 09 Jul 2021 18:09:27 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 1643m
memory: 262144k
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-m2j2z (ro)
Note the key part, Requests: cpu: 1643m, memory: 262144k. Compare this with the requests in the Deployment file:
requests:
  cpu: 100m
  memory: 50Mi
Now you can see what VPA does. Of course, the recommendations change as the load on the service changes. When the resources of a running Pod fall outside the VPA recommendation, the Pod is evicted and re-created with sufficient resources.
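To confirm what the updater actually applied, the new Pods' requests can be listed directly (a sketch; the label and namespace match the example above):

# print each nginx Pod together with the requests of its first container
kubectl get pods -n vpa -l app=nginx \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.requests}{"\n"}{end}'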
4.3 Restrictions on VPA use
- It cannot be used together with the Horizontal Pod Autoscaler (HPA)
- Pods must be managed by a replica controller, such as a Deployment or StatefulSet
4.4 What are the benefits of VPA
- Pod resources are used as needed, so cluster nodes are used efficiently.
- Pods are scheduled onto nodes that have the appropriate resources available.
- You do not have to run a benchmark task to determine the appropriate values for CPU and memory requests.
- VPA can adjust CPU and memory requests at any time without human intervention, thus reducing maintenance time.