On July 17, at the Cloud Native Days China multi-cluster special session, Wang Zefeng, Huawei's cloud-native open source director, delivered a keynote speech titled “Karmada: A Cloud-Native Multi-Cloud Container Orchestration Platform,” sharing his thinking and practice on cloud-native multi-cloud and multi-cluster management.

The following is the full text of the speech.

According to a recent survey, more than 93% of enterprises use multiple cloud vendors at the same time. As cloud-native technology and the cloud market continue to mature, multi-cloud and multi-cluster deployment has become the norm, and the future will be an era of programmatic multi-cloud management services.

Typical phases of cloud-native multi-cloud and multi-cluster

Phase 1: A group of islands

  • Consistent cluster O&M
  • Consistent application delivery
  • Businesses isolated from each other
  • Data islands, resource islands, and traffic islands

Phase 2: The water city of Venice

  • Unified application delivery (deployment and O&M)
  • Unified application access (traffic distribution)
  • Unified resource allocation (orchestration and scheduling)
  • Small-scale, low-pressure cross-cluster service access

Phase 3: The Age of Exploration

Instances, data, and traffic:

  • Automatic scheduling
  • Free scaling
  • Free migration

Judging from the industry's products and the secondary development already done by some users, we are still in transition from the “group of islands” phase to the “Venice” phase. Some open source software and vendor products still focus on unified cluster lifecycle management and cluster catalogs, making it easy to select and switch between clusters, along with global distribution of traffic entering the clusters. But capabilities such as automatic resource allocation across clusters and cross-cluster application orchestration are still missing.

At present, the orchestration of cloud-native multi-cloud multi-cluster business also faces many challenges:

1) Repetitive work across clusters: operations engineers must deal with complicated cluster configuration, management differences between clusters of different cloud vendors, fragmented API access points, and other issues;

2) Overly dispersed services are hard to maintain: differentiated application configuration across clusters is complicated, and cross-cloud service access and application synchronization between clusters are difficult to manage;

3) Cluster boundaries: application availability is limited by a single cluster, and resource scheduling and elastic scaling are confined to individual clusters;

4) Vendor lock-in: service deployments are sticky and lack automatic failover, and the industry has lacked a neutral open source multi-cloud container orchestration project.

Multi-cluster container orchestration: past and present

Karmada: Open source cloud-native multi-cloud container orchestration platform

The above is a panoramic view of Karmada technology in the open source community. Karmada provides capabilities such as multi-cluster application deployment, high-availability scheduling, failover, multi-cluster service discovery and traffic governance, and multi-cloud cluster lifecycle management in a modular way, along with preset policy sets for a variety of typical user scenarios, so that users can combine them with their enterprise's actual situation to build their own multi-cloud platform.

Karmada focuses on providing multi-cluster application management capabilities based on the Kubernetes-native API, helping users migrate to a multi-cluster architecture with zero code modification and even zero YAML modification. In terms of capabilities, we mainly help users manage clusters across the whole network in a unified way. In addition, we are building typical application deployment models, including the two-site, three-data-center pattern.

Karmada architecture

Karmada exposes REST interfaces through a separate API server (the Karmada API Server), including both the Kubernetes-native API and Karmada's extended API. The Karmada controller manager performs operations based on the API objects users create, and the Karmada scheduler schedules applications across multiple clusters.

Karmada core concepts

Resource Template

  • Kubernetes-native API definitions, including CRDs
  • Applications can be propagated to multiple clusters without modification

Propagation Policy

  • The multi-cluster scheduling policy can be reused

Resource Binding

  • A generic type that drives Karmada's internal workflow

Override Policy

  • Differentiated configuration policies that can be reused across clusters

Work

  • The federation-level mapping of the final resources in a member cluster

Karmada API workflow

Karmada internal workflow

Multi-cluster application deployment

1) Zero modification – deploy a multi-cluster application using the native K8s API

  • Example policy: Configure a multi-AZ HA deployment scheme for all Deployments

  • Use the standard K8s API definition to deploy the application
  • kubectl create -f nginx-deployment.yaml
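As a sketch, the `nginx-deployment.yaml` referenced above could be an entirely ordinary Deployment; its exact contents are an assumption here, but the point is that nothing Karmada-specific is required in the manifest:

```yaml
# nginx-deployment.yaml — a plain Kubernetes Deployment; nothing Karmada-specific
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19
```

Karmada treats this object as a resource template; where it actually runs is decided by the policies described next.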

2) PropagationPolicy: a reusable multi-cluster scheduling policy

resourceSelector

  • Multiple resource types can be associated
  • Object filtering using name or labelSelector is supported

placement

clusterAffinity:

  • Defines the preferred target clusters for scheduling
  • Supports filtering by names or a labelSelector

clusterTolerations:

  • Similar to Pod tolerations and Node Taints in a single cluster

spreadConstraints:

  • Define the HA policy for application distribution
  • Clusters can be dynamically grouped by Region, AZ, and label to implement HA at different levels
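Putting these fields together, a PropagationPolicy might look like the following sketch. The cluster names `member1` and `member2` are illustrative, and the field layout follows the `v1alpha1` API, which may differ between Karmada versions:

```yaml
# A sketch of a PropagationPolicy that spreads the nginx Deployment
# across two regions for HA; cluster names are hypothetical.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx            # select by name; a labelSelector is also supported
  placement:
    clusterAffinity:
      clusterNames:          # preferred target clusters
        - member1
        - member2
    spreadConstraints:
      - spreadByField: region  # group clusters by region for HA
        maxGroups: 2
        minGroups: 2
```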

3) OverridePolicy: a reusable cross-cluster differentiated configuration policy

resourceSelector

  • Object filtering using name or labelSelector is supported

overriders

  • Multiple override plug-in types are supported
  • PlainTextOverrider: a basic plug-in for plain-text replacement
  • ImageOverrider: a plug-in for differentiated configuration of container images
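An OverridePolicy using the ImageOverrider plug-in might look like the sketch below. The cluster name and registry are illustrative, and the spec layout follows the early `v1alpha1` shape from around the v0.x releases; later Karmada versions restructure these fields (e.g. into `overrideRules`):

```yaml
# A sketch of an OverridePolicy that rewrites the image registry
# for one member cluster; names and registry are hypothetical.
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  targetCluster:
    clusterNames:
      - member1
  overriders:
    imageOverrider:
      - component: Registry          # rewrite only the registry part of the image
        operator: replace
        value: registry.example.com  # hypothetical private registry
```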

4) Cluster API: the basic unit of the resource pool, open for user queries

syncMode

  • Push and Pull modes are supported for synchronizing with member clusters

secretRef

  • In Push mode, cluster access credentials are stored separately, making it easy to open the Cluster API for self-service user queries

taints

  • A cluster-level taint and toleration mechanism supports cluster-level resource reservation and eviction

kubernetesVersion, apiEnablements

  • The K8s version and the list of APIs enabled in the cluster; scheduling based on API dependencies is supported

resourceSummary

  • Cluster resource information (capacity, usage, and amounts allocated by scheduling)
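A Cluster object registered in Push mode might look like the following sketch. The cluster name, endpoint, and Secret location are illustrative; fields such as `kubernetesVersion`, `apiEnablements`, and `resourceSummary` appear in the object's status and are reported by Karmada, not written by the user:

```yaml
# A sketch of a member cluster registered in Push mode;
# name, endpoint, and Secret reference are hypothetical.
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  syncMode: Push                           # Push or Pull
  apiEndpoint: https://172.16.0.10:6443    # hypothetical member API server
  secretRef:                               # credentials kept in a separate Secret
    namespace: karmada-cluster
    name: member1
```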

Karmada roadmap

We have already delivered the features planned for Q1 and Q2. The latest release, v0.7, provides multi-cluster east-west service discovery, and multi-cluster north-south (external) traffic access is currently under development. We keep a release cadence of one version per month so that users can adopt new capabilities quickly. In Q4 we plan to focus on integrating existing peripheral projects in the ecosystem and to complete the capability development of the overall technology stack within the year.

Attached: Karmada Community technical exchange address

Project Address:

Github.com/karmada-io/…

Slack address:

karmada-io.slack.com