Author | Hao Shuwei (Liu Sheng)

Cloud native technology, represented by Kubernetes, not only shields the infrastructure differences between cloud vendors and data centers, but also allows applications to be described and deployed in a standardized way across clouds. On this basis, Kubernetes clusters in any geographical location can be managed at low cost. This article shows how to achieve a consistent experience of cluster management and security governance across public cloud ACK clusters and self-built Kubernetes clusters in the data center.

Security architecture of ACK registered clusters

To provide a consistent cluster management and security governance experience for public cloud ACK clusters and self-built Kubernetes clusters in the data center, they must be unified under the same control plane. An ACK registered cluster allows a self-built Kubernetes cluster in any geographical location to connect to the Alibaba Cloud Container Service control plane through a public network or private network (cloud/on-premises interconnection) endpoint. Here is a schematic diagram of the ACK registered cluster:

As the architecture diagram shows, an ACK registered cluster mainly involves the following components:

  • ACK Container Service console.

  • ACK registered cluster Agent component: the Agent is deployed as a Deployment in the self-built Kubernetes cluster (or another cloud vendor's container cluster). It receives requests from the Stub component (issued via the ACK Container Service console or the registered cluster's kubeconfig), forwards them to the target cluster's Kubernetes API Server, then receives the response from the API Server and sends it back to the Stub.

  • ACK registered cluster Stub component: the Stub is deployed on the Container Service control plane, one per registered cluster. It forwards requests generated by the ACK Container Service console or the registered cluster's kubeconfig to the Agent, receives the Agent's response, and finally returns it to the client.

  • Kubernetes API Server: the API Server of the target self-built Kubernetes cluster or other cloud vendor's container cluster.

One-way registration, two-way communication

As mentioned earlier, an ACK registered cluster can connect to self-built Kubernetes clusters in any geographical location. Self-built clusters in data centers are typically in a restricted private network environment: they can only initiate outbound connections to the public network, not accept inbound ones. To solve this, the Stub/Agent link is designed so that the Agent registers with the Stub in one direction: the Agent initiates the connection to the Stub, carrying a pre-generated token and certificate for verification. The entire communication link is encrypted with TLS.
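To make the one-way registration concrete, the sketch below shows roughly what the Agent's Deployment could look like. This is an illustration only: the names, namespace, image, environment variable, and Secret are all assumptions, not the actual agent.yaml generated by the console.

```yaml
# Hypothetical sketch of the Agent Deployment; all names and values are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ack-cluster-agent      # assumed name
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ack-cluster-agent
  template:
    metadata:
      labels:
        app: ack-cluster-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/acs/agent:latest   # placeholder image
        env:
        - name: STUB_ENDPOINT                          # assumed variable: public or private Stub endpoint
          value: "stub.example.aliyuncs.com:443"
        volumeMounts:
        - name: credentials
          mountPath: /var/run/agent                    # pre-generated token and certificate for the TLS handshake
          readOnly: true
      volumes:
      - name: credentials
        secret:
          secretName: ack-agent-credentials            # placeholder Secret name
```

The key point the sketch illustrates is direction: only the Agent dials out to the Stub endpoint, so the data center firewall never needs to allow inbound connections.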

Secure access mechanism for non-managed clusters

When connecting a self-built Kubernetes cluster to the Alibaba Cloud Container Service control plane through an ACK registered cluster, users' biggest security concern is control over access to their own cluster. The following measures ensure that users retain full security control over their self-built clusters.

  • The ACK control plane does not store any key information of the user's self-built cluster. A self-built Kubernetes cluster has its own certificate system; if the ACK registered cluster accessed it with the self-built cluster's kubeconfig, access to the user's cluster would become uncontrollable. In fact, for both security and a consistent control-plane experience, the registered cluster must mask the difference between the control plane's certificate system and the self-built cluster's. The solution is that the control plane accesses the registered cluster's Stub component using certificates issued by ACK; after the Stub and Agent authenticate the request, the Agent performs layer-7 proxy forwarding to the target API Server; finally, RBAC authentication and auditing of the request are completed on the API Server, as shown in the figure below.

  • Cluster access permissions converge on the Agent component. Once the Agent is deployed in the user-created cluster, all access from the ACK control plane passes through the Stub/Agent link and converges on the Agent side, so users retain full control over access permissions to their own cluster.

  • The Agent component is deployed non-invasively. The Agent is deployed as a Deployment in the self-built Kubernetes cluster, requiring no changes to or operations on the cluster itself. The source code of the Agent component will be open sourced later.

  • Security audit can be enabled. You can enable the security audit function in a registered cluster to query and audit all operations performed on the cluster.

Cluster management with consistent experience

Suppose user A has created an ACK cluster on the public cloud and a self-built Kubernetes cluster in the data center. How can the two Kubernetes clusters in different cloud environments be managed with a consistent experience? It is simple: create an ACK registered cluster and connect the self-built cluster to it.

Create an ACK registered cluster

On the registered cluster creation page of the ACK Container Service console, we only need to select the region closest to the self-built Kubernetes cluster and configure the VPC network and security group. The registered cluster can be created within 3 minutes, as shown in the figure below.

On the cluster details page, the connection information includes cluster import agent configurations for connecting the self-built Kubernetes cluster over either the public network or a private network, as shown in the figure below:

Connect the self-built Kubernetes cluster

Deploy the cluster import agent configuration above in a self-built Kubernetes cluster:

$ kubectl apply -f agent.yaml

After the Agent component is running normally, we can view the cluster list in the ACK Container Service console, as shown in the figure below. The cluster named ACK is the ACK managed cluster, Kubernetes version 1.20.4-aliyun.1; the cluster named IDC-K8S is the ACK registered cluster connected to the self-built Kubernetes cluster, Kubernetes version 1.19.4.

The registered cluster IDC-K8S can now be used to manage the self-built Kubernetes cluster; its cluster overview and node list are shown in the figures below.

From then on, users can use the ACK Container Service console to perform cluster management, node management, application management, and O&M operations on both on-cloud and off-cloud clusters with a consistent experience.

Security governance for consistent experience

Kubernetes clusters on different cloud platforms come with different security governance capabilities and different ways of configuring and managing security policies. Such uneven capabilities force the operations team to learn each platform's security management mechanism when defining user roles and access permissions. If management and secure access control capabilities are inadequate, problems such as role permission violations and access management risks are very likely.

For example, when multiple projects use Kubernetes container clusters that belong to different cloud platforms, administrators need to map all users and their activities to the corresponding container clusters, so that they know who did what, and when. There may be multiple accounts requiring different levels of access, and as more people join and leave or change teams and projects, managing these users' permissions becomes increasingly complex.

The ACK registered cluster provides the self-built Kubernetes cluster with consistent experience of security governance capability in the following aspects.

Manage cluster access control with the Alibaba Cloud primary/sub-account authentication system and the Kubernetes RBAC authorization system

Suppose there are two users with different job responsibilities: developer testuser01 and tester testuser02. Create the sub-accounts testuser01 and testuser02 for them, then assign the following permissions on the ACK cluster and the IDC-K8S cluster according to their responsibilities:

  • Developer testuser01 is granted read and write permissions on all namespaces of the ACK cluster and on the test namespace of the IDC-K8S cluster.

  • Tester testuser02 is granted read and write permissions only on the test namespace of the IDC-K8S cluster.
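Under the hood, namespace-scoped grants like these map to standard Kubernetes RBAC objects in the target cluster. A minimal sketch of what the test-namespace grant for testuser02 could look like follows; the binding name, the subject name format for a RAM sub-account, and the choice of the built-in admin ClusterRole are assumptions for illustration, not the exact objects ACK creates.

```yaml
# Hypothetical RBAC sketch: grants read/write in the "test" namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testuser02-test-rw     # assumed name
  namespace: test
subjects:
- kind: User
  name: "27xxxxxxxxxxxx"       # placeholder: sub-account identity as seen by the API Server
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                  # built-in ClusterRole giving read/write within a namespace
  apiGroup: rbac.authorization.k8s.io
```

Because the RoleBinding lives in the test namespace, the same user has no visibility into any other namespace of the cluster.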

Use the primary account to authorize developer testuser01 and tester testuser02: in the authorization management page of the ACK Container Service console, select the corresponding sub-accounts. The authorization configuration is as follows:

After completing the authorization wizard for testuser01 and testuser02, log in to the Container Service console as testuser01 to verify that testuser01 has read and write permissions on all namespaces of the ACK cluster, but only on the test namespace of the IDC-K8S cluster.

Then log in to the Container Service console as testuser02 to verify that testuser02 cannot see the ACK cluster and has read and write permissions only on the test namespace of the IDC-K8S cluster.

Cluster audit

In a Kubernetes cluster, the API Server audit log helps cluster administrators record and trace the daily operations of different users, and is an important part of secure cluster operations. With the cluster audit function, you can visually trace the daily operations of different users in a registered cluster.
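For reference, audit logging on a self-built cluster is driven by an audit policy file passed to kube-apiserver via the --audit-policy-file flag. A minimal sketch is shown below; the specific rules are illustrative only, not the policy the registered cluster's audit function actually uses.

```yaml
# Illustrative audit policy for kube-apiserver (audit.k8s.io/v1).
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Drop high-volume read-only noise from system components.
- level: None
  users: ["system:kube-proxy"]
  verbs: ["watch", "list"]
# Record request bodies for writes to Secrets, which are security-sensitive.
- level: Request
  resources:
  - group: ""
    resources: ["secrets"]
  verbs: ["create", "update", "patch", "delete"]
# Record metadata (who, what, when) for everything else.
- level: Metadata
```

Rules are evaluated top to bottom and the first match wins, which is why the noise-suppression rule comes first and the catch-all Metadata rule comes last.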

Here is an example of log auditing for a self-built Kubernetes cluster.

Configuration inspection

The configuration inspection function scans workload configurations for security risks, provides inspection details and reports, and analyzes and interprets the results, helping users learn whether the configurations of currently running applications carry security risks.

The following is an example of self-built Kubernetes cluster inspection details.

Author’s brief introduction

Hao Shuwei (Liu Sheng), technical expert at Alibaba Cloud Container Service and core member of the cloud native distributed cloud team, focuses on cloud native technologies such as multi-cluster unified management and scheduling, hybrid clusters, and application delivery and migration.

Click the link below to watch the related video: www.bilibili.com/video/BV1WU…