A service in production has to withstand real pressure and pass real tests; otherwise it's us, the people who built it, who take the blame. Remember the scene above.

The old way was to deploy a service as a fixed cluster, which has an obvious limitation. Back when I worked with Alibaba, their elastic computing platform could scale capacity up and down dynamically based on a CPU threshold, which felt very impressive at the time; with our usual practices it was very hard to achieve. I never imagined that today, with Kubernetes, we could do the same thing ourselves. Let me show you something real.

Kubernetes auto-scaling targets a ReplicationController. It monitors the CPU usage of all the RC's Pods, and if usage exceeds the configured ratio it starts more Pods to serve the load. For auto-scaling to work we must set a threshold, and we must also declare how much CPU each Pod requests; otherwise Kubernetes has no baseline against which to calculate the CPU utilization ratio. So we need to specify a CPU request for each Pod when we create the RC.
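As a rough illustration of the arithmetic (my own sketch; the exact algorithm depends on the Kubernetes version): the autoscaler compares each Pod's actual CPU usage against its requested CPU and computes `desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization)`. With a request of 200m and a 10% threshold, for example, scaling kicks in once average usage per Pod exceeds 20m (200m × 10%).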

Configuration

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: thrift-c
  namespace: thrift-demo
spec:
  replicas: 1
  selector:
    app: thrift-c
  template:
    metadata:
      name: thrift-c
      labels:
        app: thrift-c
    spec:
      containers:
      - name: thrift-c
        resources:
          requests:
            cpu: 200m
        image: registry2.io/thrift/thrift-c:<tag>   # tag garbled in the original; use your image's version
        imagePullPolicy: Always
        ports:
        - containerPort: 9091
      imagePullSecrets:
      - name: registry2key
```
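Assuming the manifest above is saved as thrift-c-rc.yaml (a filename chosen here for illustration), it can be created with:

```bash
kubectl create -f thrift-c-rc.yaml
```

The namespace is already set in the metadata, so no -n flag is needed.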

Note that replicas is set to 1: we start with a single Pod and let the autoscaler do the rest.

I'll look in the Dashboard to see whether it's running.
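If you prefer the command line to the Dashboard, the same check can be done with (a simple sketch):

```bash
kubectl get pods -n thrift-demo
```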

Pressure test

The first step is to enable auto-scaling for our service. There are two ways to do this: with a configuration file, or with a kubectl command. Here is the command (a configuration-file sketch follows below):

```bash
kubectl autoscale rc thrift-c -n thrift-demo --max=10 --cpu-percent=10 --min=1
```

--max is the maximum number of Pods, --min the minimum, and --cpu-percent the scaling threshold, expressed as a percentage of the requested CPU. After running this command, auto-scaling is enabled for the RC thrift-c.
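For completeness, here is a sketch of the configuration-file approach mentioned above. It uses the autoscaling/v1 HorizontalPodAutoscaler API (older clusters may expose this under a beta API group), with values mirroring the command:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: thrift-c
  namespace: thrift-demo
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: thrift-c                     # the RC created earlier
  minReplicas: 1                       # --min
  maxReplicas: 10                      # --max
  targetCPUUtilizationPercentage: 10   # --cpu-percent
```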

You can see the target and current CPU utilization as well as the Pod count. We don't have any traffic yet.
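From the command line, the same information is available via kubectl (the output below is illustrative; exact columns vary across Kubernetes versions):

```bash
$ kubectl get hpa -n thrift-demo
NAME       REFERENCE                        TARGET   CURRENT   MINPODS   MAXPODS   REPLICAS
thrift-c   ReplicationController/thrift-c   10%      0%        1         10        1
```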

Step two: apply pressure. To drive up traffic and concurrency, here's a simple command:

```bash
while true; do wget -q -O- "http://test.k8s.io/hi?name=3213"; done
```

I ran this command on three separate machines, and the load shot up.

Let's watch the number of Pods as CPU utilization climbs past 10%.
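To watch the autoscaler react in real time, you can leave a watch running in another terminal (a minimal sketch):

```bash
# watch the Pod count change as load rises and falls
kubectl get pods -n thrift-demo -w

# or watch the HPA's reported utilization and replica count
kubectl get hpa thrift-c -n thrift-demo -w
```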

The number of Pods quickly grew from one to four, and if capacity still couldn't keep up, more Pods would be added (up to --max). When I stopped the load-test command and CPU usage fell, Kubernetes scaled the Pods back down to the original count within a few minutes. Scale-up reacts to pressure promptly, while scale-down is deliberately delayed to avoid thrashing if the load returns shortly afterwards; once CPU has been stable for a while, the Pods are reduced to the configured --min.

With this feature we no longer have to worry about sudden pressure and server downtime; the bottleneck shifts to the network, disk I/O, and the database. If a node goes down, Kubernetes moves all of its Pods to other nodes and starts them there, ensuring your Pods are always running. One caveat: the Kubernetes Master is a single point of failure. I'll look into Master high availability later when I have time; many users have already studied it and published solutions, so if you're interested you can read up on it first.