Linkerd 2.10 series
- Linkerd V2 Service Mesh
- Deploying a Service Mesh on Tencent Cloud K8S — deploying the emojivoto application with Linkerd2 & Traefik2
- Learn about the basic features of Linkerd 2.10 and step into the era of Service Mesh
- Linkerd 2.10 (Step by Step) — 1. Add your service to Linkerd
- Linkerd 2.10 (Step by Step) — 2. Automated canary releases
- Linkerd 2.10 (Step by Step) — 3. Automatically rotating control plane TLS and webhook TLS credentials
Linkerd 2.10 Chinese version
- linkerd.hacker-linner.com
The Linkerd data plane proxy is multithreaded and can run a variable number of worker threads so that its resource usage matches the application workload.
Of course, in a vacuum, a proxy shows the best throughput and lowest latency when it is allowed to use as many CPU cores as possible. In practice, however, there are other factors to consider.
A real-world deployment is not a load test in which the client and server do nothing except saturate the proxy with requests. Instead, the service mesh model deploys the proxy as a sidecar of the application container. Each proxy only handles the traffic into and out of the pod it is injected into, which means its throughput and latency are limited by the application workload. If an application container instance can only handle so many requests per second, it may not matter that the proxy could handle more. In fact, giving the proxy more CPU cores than it needs to keep up with the application can hurt overall performance, as the application may have to compete with the proxy for limited system resources.
Therefore, it is more important for a single proxy to handle its traffic efficiently than for all proxies to be configured to handle the maximum possible load. The main way to tune a proxy's resource usage is to limit the number of worker threads it uses to forward traffic. There are several ways to do this.
Use the proxy-cpu-limit Annotation
The simplest way to configure the proxy's thread pool is with the config.linkerd.io/proxy-cpu-limit annotation. This annotation configures the proxy injector to set an environment variable that controls the number of CPU cores the proxy will use.
When Linkerd is installed with the linkerd install CLI command, the --proxy-cpu-limit flag sets this annotation globally for all proxies injected by that Linkerd installation. For example:
linkerd install --proxy-cpu-limit 2 | kubectl apply -f -
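To sanity-check that an injected workload picked up the setting, you can inspect the pod. The pod name below is a placeholder, and the exact name of the environment variable set by the injector is an assumption (matched loosely here), so treat this as illustrative:
# Check for the annotation and the injector-set environment variable
kubectl get pod my-pod -o yaml | grep proxy-cpu-limit
kubectl get pod my-pod -o yaml | grep -i cores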
For more fine-grained configuration, the annotation can be added to any injectable Kubernetes resource, such as a Namespace, Pod, or Deployment.
For example, the following configures the proxy in the my-deployment Deployment to use one CPU core:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: my-deployment
  # ...
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-cpu-limit: '1'
    # ...
Unlike Kubernetes CPU limits and requests, which can be expressed in milliCPUs, the proxy-cpu-limit annotation should be expressed as a whole number of CPU cores. Fractional values are rounded up to the nearest whole number.
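Because the annotation is read by the proxy injector, it can also be set on a namespace so that every pod injected in that namespace inherits the limit. A minimal sketch, assuming a namespace named emojivoto that also enables injection:
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/proxy-cpu-limit: '1'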
Use Kubernetes CPU Limits and Requests
Kubernetes provides CPU limits and CPU requests to configure the resources allocated to any pod or container. These can also be used to configure the CPU usage of the Linkerd proxy. However, depending on how the kubelet is configured, using Kubernetes resource limits instead of the proxy-cpu-limit annotation may not be ideal.
The kubelet uses one of two mechanisms to enforce pod CPU limits, determined by its --cpu-manager-policy option. With the default CPU manager policy, none, the kubelet uses CFS quotas to enforce CPU limits. This means that the Linux kernel is configured to limit how much time the threads belonging to a given process are scheduled. Alternatively, the CPU manager policy can be set to static, in which case the kubelet uses Linux cgroups to enforce CPU limits for containers that meet certain criteria.
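For reference, a minimal sketch of enabling the static policy through the kubelet configuration file; how this file is wired into your nodes depends on how the kubelet is run, so treat it as illustrative:
# Equivalent to passing --cpu-manager-policy=static to the kubelet
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static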
When the environment variable configured by the proxy-cpu-limit annotation is not set, the proxy runs as many worker threads as there are available CPU cores. This means that with the default none CPU manager policy, the proxy may spawn a large number of worker threads while the Linux kernel limits how often they can be scheduled. This is less efficient than simply reducing the number of worker threads, as proxy-cpu-limit does: more time is spent on context switches and each worker thread runs less often, which can degrade latency.
On the other hand, using cgroup cpusets limits the number of CPU cores available to the process. In essence, the proxy sees fewer CPU cores than the system actually has. This results in behavior similar to the proxy-cpu-limit annotation.
However, it is worth noting that certain conditions must be met in order to use this mechanism:
- The kubelet must be configured with the static CPU manager policy.
- The pod must belong to the Guaranteed QoS class. This means that every container in the pod must have both a CPU and memory limit and request, and each limit must have the same value as its request.
- The CPU limit and CPU request must be whole numbers greater than or equal to 1 (a pod spec satisfying all three criteria is sketched below).
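A minimal sketch of a pod that satisfies all three criteria; the names and image are illustrative. Note that after injection the proxy container's own resources also count toward the pod's QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    linkerd.io/inject: enabled
spec:
  containers:
    - name: app
      image: my-app:latest   # assumed image
      resources:
        requests:
          cpu: '1'
          memory: 256Mi
        limits:
          cpu: '1'
          memory: 256Mi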
If you are not sure whether all of these criteria will be met, it is best to use the proxy-cpu-limit annotation in addition to any Kubernetes CPU limits and requests.
Use Helm
When using Helm, if the above cgroup-based CPU limit conditions are not met, you must set the proxy.cores Helm variable in addition to proxy.cpu.limit.
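For illustration, a minimal sketch of setting both values at install time; the chart and release names are assumptions, the value paths should be checked against your chart version, and the identity certificate values required for a real install are omitted:
helm install linkerd2 linkerd/linkerd2 \
  --set proxy.cores=2 \
  --set proxy.cpu.limit=2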