Linkerd 2.10 series

  • Linkerd v2.10 Service Mesh
  • Deploying a Service Mesh on Tencent Cloud Kubernetes: deploying the emojivoto application with Linkerd2 and Traefik2
  • Learn the basic features of Linkerd 2.10 and step into the Service Mesh era
  • Adding your service to Linkerd
  • Automated canary releases
  • Automatically rotating control plane TLS and webhook TLS credentials
  • Configuring an external Prometheus instance
  • Configuring proxy concurrency
  • Configuring retries
  • Configuring timeouts
  • Control plane debug endpoints
  • Customizing Linkerd configuration with Kustomize
  • Distributed tracing with Linkerd
  • Debugging 502s
  • Debugging HTTP applications with per-route metrics
  • Debugging gRPC applications with request tracing
  • Exporting metrics
  • Exposing the dashboard
  • Generating your own mTLS root certificates
  • Getting per-route metrics
  • Injecting faults (chaos engineering)
  • Graceful pod shutdown
  • Ingress traffic
  • Installing multi-cluster components
  • Installing Linkerd
  • Installing Linkerd with Helm
  • Linkerd and Pod Security Policies (PSP)
  • Manually rotating control plane TLS credentials
  • Modifying the proxy log level
  • Multi-cluster communication
  • Using GitOps with Linkerd and Argo CD

Linkerd 2.10 (Chinese edition)

  • linkerd.hacker-linner.com

Debugging a service mesh can be difficult. When something doesn’t work, is the problem with the proxy? With the application? With the client? With the underlying network? Sometimes nothing beats looking at raw network data.

If you need network-level visibility into packets entering and leaving your application, Linkerd offers a debug sidecar with some useful tooling. Setting the config.linkerd.io/enable-debug-sidecar: "true" annotation on a pod adds the debug sidecar to it. For convenience, the linkerd inject command provides an --enable-debug-sidecar option that sets this annotation for you.

(Note that the set of containers in a Kubernetes pod is immutable, so simply adding this annotation to a pre-existing pod won’t work: it must be present when the pod is created.)
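Since the annotation must exist at pod-creation time, one way to apply it is to patch the workload’s pod template, which causes the controller to recreate its pods with the annotation in place. A sketch, assuming the emojivoto voting deployment used in the examples below:

```shell
# Add the debug-sidecar annotation to the Deployment's pod template;
# the Deployment controller then recreates the pod, and the proxy
# injector adds the debug sidecar to the new pod.
kubectl -n emojivoto patch deploy/voting --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/enable-debug-sidecar":"true"}}}}}'
```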

The debug sidecar image includes tshark, tcpdump, lsof, and iproute2. Once installed, it automatically starts logging all incoming and outgoing traffic with tshark, which can then be viewed with kubectl logs. Alternatively, you can use kubectl exec to access the container and run commands directly.
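For instance, since tcpdump ships in the image, you could capture traffic to a pcap file and copy it out for offline analysis in a tool like Wireshark. A sketch, assuming the voting-svc pod from the examples below:

```shell
# Capture up to 1000 TCP packets from the pod's network namespace into
# a pcap file, then copy the file out of the debug container.
POD=$(kubectl -n emojivoto get pod -l app=voting-svc \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n emojivoto exec "$POD" -c linkerd-debug -- \
  tcpdump -i any -w /tmp/capture.pcap -c 1000 tcp
kubectl -n emojivoto cp "$POD":/tmp/capture.pcap capture.pcap -c linkerd-debug
```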

For example, if you have followed the Linkerd getting-started guide and installed the emojivoto application, and you want to debug traffic to the voting service, you can run:

kubectl -n emojivoto get deploy/voting -o yaml \
  | linkerd inject --enable-debug-sidecar - \
  | kubectl apply -f -

This deploys the debug sidecar container to all pods in the voting service. (Note that since there is only one pod in this deployment, that pod will be recreated to perform this operation; see the note on pod immutability above.)

You can confirm that the debug container is running by listing all containers in pods with the app=voting-svc label:

kubectl get pods -n emojivoto -l app=voting-svc \
  -o jsonpath='{.items[*].spec.containers[*].name}'

You can then watch the live tshark output in the logs by simply running:

kubectl -n emojivoto logs deploy/voting linkerd-debug -f

If that’s not enough, you can exec into the container and run your own commands in the context of the pod’s network. For example, if you want to inspect the HTTP headers of requests, you could run:

kubectl -n emojivoto exec -it \
  $(kubectl -n emojivoto get pod -l app=voting-svc \
    -o jsonpath='{.items[0].metadata.name}') \
  -c linkerd-debug -- tshark -i any -f "tcp" -V -Y "http.request"

One error message that the debug sidecar is useful for troubleshooting is Connection refused:

ERR! [<time>] proxy={server=in listen=0.0.0.0:4143 remote=some.svc:50416} linkerd2_proxy::app::errors unexpected error: error trying to connect: Connection refused (os error 111) (address: 127.0.0.1:8080)

In this case, the tshark command can be modified to listen for traffic on the specific ports mentioned in the error, like so:

kubectl -n emojivoto exec -it \
  $(kubectl -n emojivoto get pod -l app=voting-svc \
    -o jsonpath='{.items[0].metadata.name}') \
  -c linkerd-debug -- tshark -i any -f "tcp" -V \
  -Y "(tcp.srcport == 4143 and tcp.dstport == 50416) or tcp.port == 8080"

Note that a similar error message is Connection reset by peer. If no correlated errors or messages appear in the application’s log output, this error is usually benign, and in that case the debug container is unlikely to help resolve it.

ERR! [<time>] proxy={server=in listen=0.0.0.0:4143 remote=some.svc:35314} linkerd2_proxy::app::errors unexpected error: connection error: Connection reset by peer (os error 104)
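To check whether such resets really are benign, one option is to filter the tshark output down to TCP RST packets and see which peer is resetting connections and on which ports. A sketch, reusing the exec pattern from above:

```shell
# Show only TCP packets with the RST flag set, revealing which side is
# resetting connections and on which ports (tcp.flags.reset is a
# standard tshark display filter).
kubectl -n emojivoto exec -it \
  $(kubectl -n emojivoto get pod -l app=voting-svc \
    -o jsonpath='{.items[0].metadata.name}') \
  -c linkerd-debug -- tshark -i any -f "tcp" -Y "tcp.flags.reset == 1"
```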

Of course, these examples only work if you are able to exec into arbitrary containers in the Kubernetes cluster. For an alternative to this approach, see Linkerd Tap.
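As a sketch of that alternative: in Linkerd 2.10 tap lives in the viz extension, so the same voting traffic can be observed without exec access to any container:

```shell
# Stream live request metadata (method, path, status, latency) for the
# voting deployment via the tap API; requires the viz extension to be
# installed (linkerd viz install) and tap RBAC permissions.
linkerd viz tap deploy/voting -n emojivoto
```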