Linkerd 2.10 series

  • Linkerd V2 Service Mesh
  • Deploying a Service Mesh on Tencent Cloud K8S — Linkerd2 & Traefik2 deploying the emojivoto application
  • Learn about the basic features of Linkerd 2.10 and step into the era of Service Mesh
  • Linkerd 2.10(Step by Step) — 1. Add your service to Linkerd
  • Linkerd 2.10(Step by Step) — 2. Automated Canary Releases
  • Linkerd 2.10(Step by Step) — 3. Automatically Rotating Control Plane TLS and Webhook TLS Credentials

Linkerd 2.10 (Chinese version)

  • linkerd.hacker-linner.com

Although the linkerd-viz extension ships with its own Prometheus instance, in some cases it makes more sense to use an external instance instead.

Note that this approach requires you to manually add and maintain the additional scrape configuration in your Prometheus setup.

This tutorial shows how to configure an external Prometheus instance to scrape both control plane and proxy metrics in a format consumable by both users and Linkerd control plane components such as the web dashboard.

There are two important steps here:

  • Configure the external Prometheus instance to scrape Linkerd metrics.
  • Configure the linkerd-viz extension to use that Prometheus.

Prometheus scrape configuration

The following scrape configuration must be applied to the external Prometheus instance.

It is a subset of the scrape configuration used by linkerd-viz's own Prometheus.

Before applying it, be sure to replace the templated values (wrapped in {{}}) with concrete values, or the configuration will not work properly.

    - job_name: 'linkerd-controller'
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - '{{.Values.linkerdNamespace}}'
          - '{{.Values.namespace}}'
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_container_port_name
        action: keep
        regex: admin-http
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: replace
        target_label: component

    - job_name: 'linkerd-service-mirror'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_label_linkerd_io_control_plane_component
        - __meta_kubernetes_pod_container_port_name
        action: keep
        regex: linkerd-service-mirror;admin-http$
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: replace
        target_label: component

    - job_name: 'linkerd-proxy'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_pod_container_name
        - __meta_kubernetes_pod_container_port_name
        - __meta_kubernetes_pod_label_linkerd_io_control_plane_ns
        action: keep
        regex: ^{{default .Values.proxyContainerName "linkerd-proxy" .Values.proxyContainerName}};linkerd-admin;{{.Values.linkerdNamespace}}$
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      # special case k8s' "job" label, to not interfere with prometheus' "job"
      # label
      # __meta_kubernetes_pod_label_linkerd_io_proxy_job=foo =>
      # k8s_job=foo
      - source_labels: [__meta_kubernetes_pod_label_linkerd_io_proxy_job]
        action: replace
        target_label: k8s_job
      # drop __meta_kubernetes_pod_label_linkerd_io_proxy_job
      - action: labeldrop
        regex: __meta_kubernetes_pod_label_linkerd_io_proxy_job
      # __meta_kubernetes_pod_label_linkerd_io_proxy_deployment=foo =>
      # deployment=foo
      - action: labelmap
        regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
      # drop all labels that we just made copies of in the previous labelmap
      - action: labeldrop
        regex: __meta_kubernetes_pod_label_linkerd_io_proxy_(.+)
      # __meta_kubernetes_pod_label_linkerd_io_foo=bar =>
      # foo=bar
      - action: labelmap
        regex: __meta_kubernetes_pod_label_linkerd_io_(.+)
      # Copy all pod labels to tmp labels
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
        replacement: __tmp_pod_label_$1
      # Take `linkerd_io_` prefixed labels and copy them without the prefix
      - action: labelmap
        regex: __tmp_pod_label_linkerd_io_(.+)
        replacement: __tmp_pod_label_$1
      # Drop the `linkerd_io_` originals
      - action: labeldrop
        regex: __tmp_pod_label_linkerd_io_(.+)
      # Copy tmp labels into real labels
      - action: labelmap
        regex: __tmp_pod_label_(.+)
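Where these jobs live depends on how the external instance is managed. As a sketch, assuming a plain prometheus.yml file, they sit under scrape_configs next to any jobs you already have (with the {{}} template values replaced):

```yaml
global:
  scrape_interval: 10s

scrape_configs:
  # ... your existing scrape jobs ...

  - job_name: 'linkerd-controller'
    # (configuration from above)
  - job_name: 'linkerd-service-mirror'
    # (configuration from above)
  - job_name: 'linkerd-proxy'
    # (configuration from above)
```

After reloading Prometheus, the three new jobs should show up as targets on its /targets page.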

The running configuration of the built-in Prometheus can be used as a reference:

kubectl -n linkerd-viz get configmap prometheus-config -o yaml

Linkerd-viz extension configuration

Components of Linkerd's viz extension, such as metrics-api, rely on a Prometheus instance to power the dashboard and CLI.

The prometheusUrl field gives you a single place to point all of these components at an external Prometheus URL. This can be done both through the CLI and through Helm.

CLI

This can be done by passing a file containing the field below to the values flag of the linkerd viz install command.

prometheusUrl: existing-prometheus.xyz:9090

Note that when applied through the CLI, this configuration does not persist across installations: during reinstalls, upgrades, and so on, the user must pass the same information again.

When an external Prometheus is used and the prometheusUrl field is configured, Linkerd's own Prometheus is still included in the installation. If you want to disable it, be sure to also include the following configuration:

prometheus:
  enabled: false
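Putting the two fields together, a complete CLI flow might look like this sketch (the values file name is arbitrary, and existing-prometheus.xyz:9090 is a placeholder for your instance):

```shell
# Write a values file that points the viz extension at the external
# Prometheus and disables the bundled instance.
cat > viz-values.yaml <<'EOF'
prometheusUrl: existing-prometheus.xyz:9090
prometheus:
  enabled: false
EOF

# Render the viz manifests with these overrides and apply them.
# (Skipped if the linkerd CLI is not on the PATH.)
if command -v linkerd >/dev/null 2>&1; then
  linkerd viz install --values viz-values.yaml | kubectl apply -f -
fi
```

Remember that the same values file must be passed again on every subsequent install or upgrade.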

Helm

When using Helm, the same configuration can be applied through values.yaml. Unlike with the CLI, Helm then keeps the configuration in place across upgrades.
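As a sketch, assuming the viz chart is installed from the stable Linkerd Helm repo under the release name linkerd-viz (both are assumptions; adjust to your setup), the same two values can be passed with --set or kept in a versioned values.yaml:

```shell
# Placeholder address of the external Prometheus instance.
PROM_URL="existing-prometheus.xyz:9090"

# Upgrade (or install) the viz extension with the external
# Prometheus configuration. (Skipped if helm is not on the PATH.)
if command -v helm >/dev/null 2>&1; then
  helm upgrade --install linkerd-viz linkerd/linkerd-viz \
    --set prometheusUrl="$PROM_URL" \
    --set prometheus.enabled=false
fi
```

Keeping these values in a values.yaml under version control and passing it on every helm upgrade is what keeps the configuration stable across releases.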