Authorization policy

Linkerd’s new authorization policy feature gives you fine-grained control over which services are allowed to communicate with each other. These policies are built directly on the secure service identity provided by Linkerd’s automatic mTLS feature. In keeping with Linkerd’s design principles, authorization policy is expressed in a composable, Kubernetes-native way that can describe a wide range of behaviors with minimal configuration.


To this end, Linkerd 2.11 introduces a set of default authorization policies that can be applied at the cluster, namespace, or pod level simply by setting a Kubernetes annotation (a sketch follows the list below). These include:

  • all-authenticated (allow requests only from mTLS-validated services);
  • all-unauthenticated (allow all requests);
  • deny (deny all requests);
  • …and more.
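As a minimal sketch, assuming the config.linkerd.io/default-inbound-policy annotation name used by the 2.11 policy feature (and a hypothetical namespace name), a namespace-wide default could look like this:

```yaml
# Sketch: a namespace-wide default inbound policy set via annotation.
# Assumes Linkerd 2.11's config.linkerd.io/default-inbound-policy
# annotation; the namespace name is hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: my-apps
  annotations:
    # Only mTLS-validated (meshed) clients may connect to pods here.
    config.linkerd.io/default-inbound-policy: all-authenticated
```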

Linkerd 2.11 also introduces two new CRDs, Server and ServerAuthorization, which together allow fine-grained policies to be applied to any set of pods. For example, a Server can select the admin port on all pods in a namespace, and a ServerAuthorization can allow health-check connections from the kubelet, or mTLS connections for metrics collection.

Together, these annotations and CRDs let you easily specify a wide range of policies for a cluster, from “allow all traffic” to “port 8080 on service Foo can only receive mTLS traffic from services using the Bar service account,” and more. (See the full policy documentation.)
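As a hedged sketch of that last example, a Server/ServerAuthorization pair might look like the following; the namespace, labels, and names are hypothetical, and the CRD apiVersion should be checked against your release:

```yaml
# Sketch: restrict port 8080 on Foo's pods to mTLS traffic from the
# Bar service account. Namespace, labels, and names are hypothetical.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: demo
  name: foo-8080
spec:
  podSelector:
    matchLabels:
      app: foo        # selects Foo's pods
  port: 8080
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: demo
  name: foo-8080-from-bar
spec:
  server:
    name: foo-8080    # the Server defined above
  client:
    meshTLS:
      serviceAccounts:
        - name: bar   # only mTLS clients running as "bar" are allowed
```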


Retries for HTTP requests with bodies

Retrying failed requests is a key part of Linkerd’s ability to improve the reliability of Kubernetes applications. Until now, Linkerd only allowed retries of requests without a body, such as HTTP GETs, for performance reasons. In 2.11, Linkerd can also retry failed requests that have a body, including gRPC requests, up to a maximum body size of 64KB.
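Retries are still configured per route via ServiceProfiles; as a sketch (the service name and route below are hypothetical), a bodied POST route can now be marked retryable:

```yaml
# Sketch: opting a POST (bodied) route into retries via a ServiceProfile.
# Service name and route are hypothetical; bodies over 64KB are not retried.
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: foo.demo.svc.cluster.local
  namespace: demo
spec:
  routes:
    - name: POST /api/items
      condition:
        method: POST
        pathRegex: /api/items
      isRetryable: true   # now works for requests with bodies (<= 64KB)
```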

Container startup ordering

By default, Linkerd 2.11 now ensures that the linkerd2-proxy container is ready before any other container in the pod starts. This works around Kubernetes’s lack of control over container startup ordering and resolves a large class of tricky race conditions in which application containers try to make connections before the proxy is ready.
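This behavior can be opted out of per workload; a sketch, assuming the config.linkerd.io/proxy-await annotation gates it (the pod name and image are hypothetical):

```yaml
# Sketch: opting a pod out of the default startup ordering. Assumes the
# config.linkerd.io/proxy-await annotation controls this behavior in 2.11;
# the pod name and image are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    config.linkerd.io/proxy-await: "disabled"  # default is "enabled"
spec:
  containers:
    - name: app
      image: my-app:latest
```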

Smaller, faster and lighter

As usual, Linkerd 2.11 continues to make Linkerd the lightest and fastest service mesh for Kubernetes. Relevant changes in 2.11 include:

  • The control plane has been reduced to just 3 deployments.
  • Thanks to the highly active Rust networking ecosystem, Linkerd’s data-plane micro-proxy is even smaller and faster.
  • Most SMI functionality has been removed from the core control plane and moved to an extension.
  • Linkerd’s images now use minimal “distroless” base images.

There’s more!

Linkerd 2.11 also includes a number of other improvements, performance enhancements, and bug fixes, including:

  • New CLI tab completion for Kubernetes resources.
  • config.linkerd.io annotations can now be set on a Namespace resource, where they serve as defaults for pods created in that namespace (a sketch follows this list).
  • A new linkerd check -o short option for abbreviated output.
  • A new Extensions page in the dashboard.
  • Fuzz testing for the proxy!
  • The proxy now sets informational l5d-client-id and l5d-proxy-error headers.
  • Many improvements to Helm configurability and to linkerd check.
  • Experimental support for StatefulSets with linkerd-multicluster.
  • And more!
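As a sketch of the namespace-level defaults mentioned above, any config.linkerd.io annotation set on a Namespace applies to pods created in it (the values below are illustrative):

```yaml
# Sketch: config.linkerd.io annotations on a Namespace act as defaults
# for pods created in it. Values below are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  annotations:
    config.linkerd.io/proxy-cpu-request: "100m"
    config.linkerd.io/proxy-memory-request: "64Mi"
```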

See the complete release notes for more information.
