Linkerd 2.10 series

  • Linkerd v2.10 Service Mesh
  • Deploying a Service Mesh on Tencent Cloud K8S — Linkerd2 & Traefik2 deploying the emojivoto application
  • Learn about the basic features of Linkerd 2.10 and step into the era of Service Mesh
  • Linkerd 2.10 – Add your service to Linkerd
  • Linkerd 2.10 — Automated Canary release
  • Linkerd 2.10 — Automatically rotating control plane TLS and webhook TLS credentials
  • Linkerd 2.10 — How to configure an external Prometheus instance
  • Linkerd 2.10 – Configure proxy concurrency
  • Linkerd 2.10 – Configure retry
  • Linkerd 2.10 — Configure timeout
  • Linkerd 2.10 – Control plane debug endpoints
  • Linkerd 2.10 – Use Kustomize to customize Linkerd configuration
  • Linkerd 2.10 — Use Linkerd for distributed tracing
  • Linkerd 2.10 — Debugging 502s
  • Linkerd 2.10 – Debugging HTTP applications with per-route metrics
  • Linkerd 2.10 – Debug gRPC applications using request tracing
  • Linkerd 2.10 — Export metrics
  • Linkerd 2.10 — Expose Dashboard
  • Linkerd 2.10 – Generate your own mTLS root certificate
  • Linkerd 2.10 – Getting per-route metrics
  • Linkerd 2.10 — Injecting faults for chaos engineering
  • Linkerd 2.10 — Graceful Pod shutdown
  • Linkerd 2.10 – Ingress traffic

Linkerd 2.10 Chinese version

  • linkerd.hacker-linner.com

Multi-cluster support in Linkerd requires additional installation and configuration on top of the default control plane installation. This guide walks through that installation and configuration, as well as common problems you may encounter.

Requirements

  • Two clusters.
  • The control plane installation in each cluster shares a common trust anchor (a quick sanity check is sketched right after this list).
  • Each of these clusters should be configured as kubectl contexts.
  • Elevated privileges on both clusters. We will be creating service accounts and granting extended privileges, so you need to be able to do that on your test clusters.
  • Support for services of type LoadBalancer in the east cluster. Check the documentation for your cluster provider, or take a look at inlets. This is what the west cluster will use to communicate with east via the gateway.
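
A quick way to sanity-check the shared trust anchor requirement (a sketch, assuming the two kubectl contexts are named west and east as in the rest of this guide, and that yq is installed):

# The two clusters should show up as contexts
kubectl config get-contexts west east

# Compare the trust anchors of both control planes; empty diff output means they match
diff \
  <(kubectl --context=west -n linkerd get cm linkerd-config -ojsonpath="{.data.values}" | yq e .identityTrustAnchorsPEM -) \
  <(kubectl --context=east -n linkerd get cm linkerd-config -ojsonpath="{.data.values}" | yq e .identityTrustAnchorsPEM -)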

Step 1: Install the multi-cluster control plane

On each cluster, run:

linkerd multicluster install | \
    kubectl apply -f -
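
If you want to see what was just installed, the multi-cluster components (gateway, service account, RBAC) live in the linkerd-multicluster namespace by default (contexts west and east assumed):

kubectl --context=west get all -n linkerd-multicluster
kubectl --context=east get all -n linkerd-multicluster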

To verify that everything started successfully, run:

linkerd multicluster check

Step 2: Link the clusters

Each cluster must be linked. This involves installing several resources in the source cluster: a Secret containing a kubeconfig that allows access to the target cluster's Kubernetes API, a Service Mirror controller for mirroring services, and a Link custom resource that holds the configuration. To link cluster west to cluster east, you would run:

linkerd --context=east multicluster link --cluster-name east |
  kubectl --context=west apply -f -
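
To see what the link created on the source side, you can list the Link resource and the service mirror controller it installed (a sketch; names assume the target cluster is called east):

kubectl --context=west -n linkerd-multicluster get links
kubectl --context=west -n linkerd-multicluster get deploy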

To verify that the credentials have been created successfully and that the clusters can access each other, run:

linkerd --context=west multicluster check
Copy the code

You should also see the list of gateways by running the following command. Note that you need to install Linkerd’s Viz extension in the source cluster to get the list of gateways:

linkerd --context=west multicluster gateways

For detailed instructions on this step, see the Linked Clusters section.

Step 3: Expose services

Services are not automatically mirrored in linked clusters. By default, only services with the mirror.linkerd.io/exported label are mirrored. For each service that you want to mirror to a linked cluster, run:

kubectl label svc foobar mirror.linkerd.io/exported=true
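
Once labelled, the service should be mirrored into the linked cluster shortly afterwards. In Linkerd 2.10 the mirrored service name is suffixed with the target cluster name, so a foobar service exported from east would typically show up in west as foobar-east (names here are illustrative):

kubectl --context=west get svc foobar-east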

You can configure a different label selector by using the --selector flag on the linkerd multicluster link command, or by editing the Link resource created by that command.
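
For example, a hypothetical link that only mirrors services carrying a custom label (the label key and value below are placeholders, not Linkerd defaults):

linkerd --context=east multicluster link --cluster-name east \
    --selector example.com/mirror=true |
  kubectl --context=west apply -f -

# Services then need that label (instead of mirror.linkerd.io/exported) to be exported:
kubectl --context=east label svc foobar example.com/mirror=true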

Using Ambassador

There is no need to use the gateway bundled with Linkerd. In fact, if you have an existing Ambassador installation, it is easy to use that instead! By reusing your existing Ambassador installation, you avoid managing multiple ingress gateways and paying for additional cloud load balancers. This guide assumes that Ambassador is installed in the ambassador namespace.

First, you need to inject the Ambassador deployment with Linkerd:

kubectl -n ambassador get deploy ambassador -o yaml | \
    linkerd inject \
    --skip-inbound-ports 80,443 \
    --require-identity-on-inbound-ports 4183 - | \
    kubectl apply -f -

This will add the Linkerd proxy, skipping the ports that Ambassador uses to handle public traffic and requiring identity on the gateway port. Next, you need to add some configuration so that Ambassador knows how to handle requests:

cat <<EOF | kubectl --context=${ctx} apply -f -
---
apiVersion: getambassador.io/v2
kind: Module
metadata:
  name: ambassador
  namespace: ambassador
spec:
  config:
    add_linkerd_headers: true
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: wildcard
  namespace: ambassador
spec:
  hostname: "*"
  selector:
    matchLabels:
      nothing: nothing
  acmeProvider:
    authority: none
  requestPolicy:
    insecure:
      action: Route
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: public-health-check
  namespace: ambassador
spec:
  prefix: /-/ambassador/ready
  rewrite: /ambassador/v0/check_ready
  service: localhost:8877
  bypass_auth: true
EOF

The Ambassador service and deployment definitions need a bit of tinkering to add the metadata required by the service mirror controller. To patch these resources, run:

kubectl --context=${ctx} -n ambassador patch deploy ambassador -p='
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/enable-gateway: "true"
'
kubectl --context=${ctx} -n ambassador patch svc ambassador --type='json' -p='[
  {"op":"add","path":"/spec/ports/-", "value":{"name": "mc-gateway", "port": 4143}},
  {"op":"replace","path":"/spec/ports/0", "value":{"name": "mc-probe", "port": 80, "targetPort": 8080}}
]'
kubectl --context=${ctx} -n ambassador patch svc ambassador -p='
metadata:
  annotations:
    mirror.linkerd.io/gateway-identity: ambassador.ambassador.serviceaccount.identity.linkerd.cluster.local
    mirror.linkerd.io/multicluster-gateway: "true"
    mirror.linkerd.io/probe-path: /-/ambassador/ready
    mirror.linkerd.io/probe-period: "3"
'

Now you can install the Linkerd multi-cluster components on your target cluster. Since we are using Ambassador as our gateway, we pass the --gateway=false flag to skip installing the Linkerd gateway:

linkerd --context=${ctx} multicluster install --gateway=false | kubectl --context=${ctx} apply -f -

With all the setup and configuration done, you are ready to link the source cluster to the Ambassador Gateway. Run the link command to specify the name and namespace of the Ambassador Service:

linkerd --context=${ctx} multicluster link --cluster-name=${ctx} --gateway-name=ambassador --gateway-namespace=ambassador \
    | kubectl --context=${src_ctx} apply -f -

From the source cluster (the cluster that is not running Ambassador), you can verify that everything is working properly by running the following command:

linkerd multicluster check

In addition, the Ambassador gateway will show up when you list the active gateways:

linkerd multicluster gateways

Trust anchor bundle

To secure connections between clusters, Linkerd needs to have a shared trust anchor. This allows the control plane to encrypt requests passed between clusters and verify the identity of those requests. This identity is used to control access to the cluster, so sharing the trust anchor is critical.

The simplest way to do this is to use the same trust anchor certificate across the clusters. If you have an existing Linkerd installation and the trust anchor key has been discarded, you may not be able to use a single certificate as the trust anchor for every cluster. Fortunately, the trust anchor can also be a bundle of certificates!

To get a trust anchor for an existing cluster, run:

kubectl -n linkerd get cm linkerd-config -ojsonpath="{.data.values}" | \
  yq e .identityTrustAnchorsPEM - > trustAnchor.crt

This command requires yq. If you do not have yq, feel free to use the tool of your choice to extract the certificate from the identityTrustAnchorsPEM field.
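
One possible alternative if yq is not available, assuming Python 3 with PyYAML is installed (just a sketch, not from the official docs):

kubectl -n linkerd get cm linkerd-config -ojsonpath="{.data.values}" | \
  python3 -c "import sys, yaml; print(yaml.safe_load(sys.stdin)['identityTrustAnchorsPEM'])" > trustAnchor.crt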

Now you need to create a new Trust Anchor and issuer for the new cluster:

step certificate create root.linkerd.cluster.local root.crt root.key \
   --profile root-ca --no-password --insecure
step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key

We use the step CLI to generate the certificates here; openssl works just as well!
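
If you want to double-check what was generated, step can print the certificate details (a plain openssl x509 -text -noout -in root.crt works too):

step certificate inspect root.crt
step certificate inspect issuer.crt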

Using the trust anchor of the old cluster and the one from the new cluster, you can create a bundle by running:

cat trustAnchor.crt root.crt > bundle.crt
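
A quick way to confirm that the bundle really contains both anchors is to count the PEM blocks (two are expected here):

grep -c "BEGIN CERTIFICATE" bundle.crt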

You now need to upgrade the existing cluster with this new bundle, and make sure that every pod you want to be able to communicate with the new cluster is restarted so that it picks up the new bundle. To upgrade the existing cluster with this new trust anchor bundle, run:

linkerd upgrade --identity-trust-anchors-file=./bundle.crt | \
    kubectl apply -f -
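
Remember that meshed pods only pick up the new bundle when they restart. A rolling restart per namespace does the trick (emojivoto is just an example namespace):

kubectl -n emojivoto rollout restart deploy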

Finally, you will be able to install Linkerd on your new cluster using the Trust Anchor bundle you just created along with the Issuer certificate and key.

linkerd install \
  --identity-trust-anchors-file bundle.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key | \
  kubectl apply -f -

Be sure to verify that the cluster started successfully by running check on each cluster.

linkerd check

Install multi-cluster control plane components using Helm

Linkerd’s multi-cluster components, namely the gateway and service mirror, can be installed via Helm rather than with the linkerd multicluster install command.

This not only allows advanced configuration, but also lets users bundle the multi-cluster installation as part of their existing Helm-based installation pipeline.

Add Linkerd’s Helm repository

First, let’s add Linkerd’s Helm repo by running the following command

# To add the repo for Linkerd2 stable releases:
helm repo add linkerd https://helm.linkerd.io/stable

Helm Multi-cluster installation process

helm install linkerd2-multicluster linkerd/linkerd2-multicluster

The default chart values will be picked up from the chart’s values.yaml file.

You can override the values in that file by providing your own values.yaml file with the -f option, or override specific values with the --set flag.
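
For example, either of the following would work (my-values.yaml is a hypothetical file name; the gateway setting is described below):

# Option 1: provide your own values file
helm install linkerd2-multicluster linkerd/linkerd2-multicluster -f my-values.yaml

# Option 2: override individual values
helm install linkerd2-multicluster linkerd/linkerd2-multicluster --set gateway=false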

The full set of configuration options can be found here

You can verify the installation by running the following command

linkerd multicluster check

Gateway installation can be disabled with the gateway setting. By default, this value is true.

Install additional access credentials

When the multi-cluster components are installed on a target cluster with linkerd multicluster install, a service account is created that source clusters use to mirror services. Using a distinct service account for each source cluster can be beneficial, because it lets you revoke service mirroring access for a particular source cluster. Additional service accounts and the associated RBAC can be generated with the linkerd multicluster allow CLI command.
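
For example, a sketch of generating credentials for a single source cluster via the CLI (check linkerd multicluster allow --help for the exact flags in your version; the account name and the target context are placeholders):

linkerd --context=target multicluster allow --service-account-name source1 | \
  kubectl --context=target apply -f -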

The same can also be achieved with Helm by setting the remoteMirrorServiceAccountName value to a list:

 helm install linkerd2-mc-source linkerd/linkerd2-multicluster --set remoteMirrorServiceAccountName={source1\,source2\,source3} --kube-context target