What are the hottest server-side technologies today? The answer is almost certainly cloud native. KubeSphere, a cloud native distributed operating system with Kubernetes as its kernel, is part of this boom in full swing. KubeSphere continues its commitment to being 100% open source and is rapidly going global thanks to the power of the open source community.

On November 3, 2021, the KubeSphere open source community is excited to announce the official release of KubeSphere 3.2.0.

Six months ago, KubeSphere 3.1.0 made a splash with features such as edge computing and metering, extending Kubernetes from the cloud to the edge and further improving interaction design and user experience. Three months ago, KubeSphere released v3.1.1, which lets you specify an existing Prometheus in the Kubernetes cluster when deploying KubeSphere.

Today, KubeSphere 3.2.0 brings even more anticipated features, including new support for GPU resource scheduling management and GPU usage monitoring to further enhance the experience in cloud native AI scenarios. In addition, features such as multi-cluster management, multi-tenant management, observability, DevOps, the App Store, and microservice governance have been enhanced to further improve interaction design and the overall user experience.

In addition, v3.2.0 received contributions and participation from more enterprises and users beyond QingCloud Technologies, spanning feature development, feature testing, defect reports, requirement suggestions, enterprise best practices, bug fixes, translation, and documentation. These contributions from the open source community helped greatly in the launch and promotion of v3.2.0, and we give special thanks at the end of this article.

Major updates in KubeSphere 3.2.0

GPU scheduling and quota management

With the rapid development of artificial intelligence, machine learning, and related technologies, more and more AI companies are emerging with demands for GPU resource scheduling management in server clusters. Among them, GPU usage monitoring and GPU resource quota management are highly requested by the community, and we have received many GPU-related requests on the KubeSphere Chinese forum. KubeSphere has always supported GPUs, and v3.2.0 now makes GPU management easier to use.

KubeSphere 3.2.0 supports visual creation of GPU workloads, scheduling of GPU resources, tenant-level quota management of GPU resources, and integration with NVIDIA GPUs and vGPUs.
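
Under the hood, these map to standard Kubernetes mechanisms. Below is a minimal sketch of the objects involved: a Deployment requesting one NVIDIA GPU via the standard device-plugin resource name, plus a namespace-level ResourceQuota capping GPU requests for a tenant. All names, the image, and the quota value are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-demo                  # illustrative workload name
  namespace: demo-project         # illustrative project (namespace)
spec:
  replicas: 1
  selector:
    matchLabels: {app: gpu-demo}
  template:
    metadata:
      labels: {app: gpu-demo}
    spec:
      containers:
      - name: cuda
        image: nvidia/cuda:11.4.2-base-ubuntu20.04
        command: ["sleep", "infinity"]
        resources:
          limits:
            nvidia.com/gpu: 1     # scheduled only onto nodes exposing GPUs
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: demo-project
spec:
  hard:
    requests.nvidia.com/gpu: "4"  # this tenant may request at most 4 GPUs in total
```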

Enhanced observability

As container and microservice technologies become more popular, the call relationships between components grow more complex and the number of processes running in the system explodes. With thousands of processes running in a distributed system, it is difficult to track the dependencies and call paths between them using traditional monitoring technology, so observability within the system becomes especially important.

Observability is the ability to measure the internal state of a system by examining its outputs. A system is said to be “observable” if its current state can be inferred solely from its output information, known as telemetry data. Observability covers Logging, Tracing, and Metrics; data collected through these three are collectively referred to as telemetry data.

  1. More powerful custom monitoring panels

KubeSphere has provided cluster-level custom monitoring since v3.1.0. You can use the default template, upload a template, or create a custom template to generate custom monitoring panels. The default templates in KubeSphere 3.2.0 add support for Grafana: you can import a Grafana dashboard by specifying its URL or by uploading its JSON file, and KubeSphere will automatically convert it into a KubeSphere monitoring panel.

The system also provides a default monitoring template for GPU resources with default metrics, reducing the cost of creating custom templates and writing YAML.

  2. Alerting, notifications, and logs
  • Supports communication with Elasticsearch over HTTPS.

  • Building on KubeSphere 3.1’s support for multiple notification channels such as email, DingTalk, WeCom, Webhook, and Slack, 3.2.0 adds support for testing and verifying the configuration of notification channels.

  3. The etcd monitoring panel now automatically adapts to the etcd Leader label.

Multi-cloud and multi-cluster management

As Kubernetes becomes more widely used in enterprises (CNCF’s 2020 user survey shows that nearly 80% of users run two or more Kubernetes clusters in production), KubeSphere aims to solve the challenges of multi-cluster and multi-cloud management by providing a unified control plane to distribute applications and their replicas to multiple clusters across public clouds and on-premises environments. KubeSphere also provides cross-cluster observability, including monitoring, logs, events, and audit logs across multiple clusters.

KubeSphere 3.2.0 takes cross-cluster scheduling one step further. When creating a FederatedDeployment, KubeSphere not only supports scheduling workloads to multiple clusters with different replica counts, it also lets you specify on the details page the total number of replicas to distribute and a weight per cluster, so that replicas are spread across clusters in proportion. This is useful when users want to flexibly scale out a deployment and distribute its replicas across multiple clusters in varying proportions.
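
KubeSphere’s multi-cluster federation is built on KubeFed, so the weighted distribution shown in the console corresponds to KubeFed’s ReplicaSchedulingPreference. A minimal sketch, assuming two member clusters named beijing and shanghai and a FederatedDeployment named gpu-demo; the console generates the equivalent for you.

```yaml
apiVersion: scheduling.kubefed.io/v1alpha1
kind: ReplicaSchedulingPreference
metadata:
  name: gpu-demo                 # must match the FederatedDeployment's name
  namespace: demo-project
spec:
  targetKind: FederatedDeployment
  totalReplicas: 9               # total replicas across all member clusters
  clusters:
    beijing:                     # illustrative member cluster names
      weight: 2                  # receives roughly 2/3 of the replicas
    shanghai:
      weight: 1                  # receives roughly 1/3 of the replicas
```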

O&M-friendly storage management

Persistent storage is fundamental to running Kubernetes in production, and stable, reliable storage safeguards enterprises’ core data. The Console in KubeSphere 3.2.0 adds storage volume management: administrators can configure, per storage type (StorageClass), whether users may clone, snapshot, or expand storage volumes, making day-to-day operation and maintenance of persistent storage for stateful applications easier.

By default, volume binding uses Immediate mode, which is unfriendly to storage backends with topology constraints and may cause Pod scheduling failures. v3.2.0 adds the delayed binding (WaitForFirstConsumer) mode, which guarantees that PVCs and PVs are not bound until a Pod using them is scheduled, so the Pod can be scheduled properly based on its resource requests.
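
Delayed binding is declared on the StorageClass itself. A minimal sketch, with the provisioner name being an illustrative placeholder for your CSI driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-topology-aware            # illustrative name
provisioner: csi.example.com          # illustrative CSI driver
# PVCs stay Pending until a Pod that uses them is scheduled, letting the
# scheduler pick a node/zone that the storage backend can actually serve.
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
allowVolumeExpansion: true
```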

Previously, the KubeSphere Console managed only storage volumes (PVCs), not storage instance (PV) resources. KubeSphere 3.2.0 implements this feature, allowing users to view, edit, and delete PV information in the Console.

When creating a snapshot of a storage volume, you can now specify the snapshot type, that is, the VolumeSnapshotClass, which designates the storage backend on which the snapshot is created.
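
This corresponds to the standard Kubernetes snapshot API. A sketch, assuming a CSI driver that supports snapshots; driver and PVC names are illustrative:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass                      # illustrative snapshot class name
driver: csi.example.com                    # must match the volume's CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap-1
  namespace: demo-project
spec:
  volumeSnapshotClassName: csi-snapclass   # the "snapshot type" chosen in the console
  source:
    persistentVolumeClaimName: data-pvc    # illustrative existing PVC
```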

Cluster-level gateways are supported

KubeSphere 3.1 supported only project-level gateways: if a user has many projects, resources are wasted, and gateways in different workspaces are independent of one another.

KubeSphere 3.2.0 introduces a cluster-level global gateway. All projects can share the same gateway, and previously created project gateways are not affected by the cluster gateway.

All project gateways can also be managed and configured centrally, so administrators no longer need to switch between workspaces to configure them. Because many Ingress Controllers in the Kubernetes ecosystem can serve as gateway solutions, KubeSphere 3.2.0 refactored the gateway backend so that any Ingress Controller supporting the v1 Ingress API can now be used as a gateway and flexibly integrated with KubeSphere.
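
Because the gateway speaks the stable networking.k8s.io/v1 Ingress API, routing rules remain ordinary Ingress objects. A standard example, with the host, class, and Service names illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-project
spec:
  ingressClassName: nginx        # whichever class the cluster gateway exposes
  rules:
  - host: demo.example.com       # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc       # illustrative backend Service
            port:
              number: 80
```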

Authentication and Authorization

Unified identity management and a complete authentication system are indispensable for logical isolation in a multi-tenant system. In addition to connecting to AD/LDAP, OAuth2, and other authentication systems, KubeSphere 3.2.0 has a built-in OpenID Connect-based authentication service that provides authentication capabilities to other components. OpenID Connect is a user authentication protocol built on the OAuth 2.0 specification; it is simple, yet provides a large number of features and security options to meet enterprise needs.
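
Identity providers are declared in KubeSphere’s cluster configuration. The sketch below follows the LDAP example pattern from the KubeSphere documentation; treat the exact field names as version-dependent assumptions and verify against the docs for your release.

```yaml
# Fragment of the ks-installer ClusterConfiguration
# (kubectl -n kubesphere-system edit cc ks-installer); all values illustrative.
spec:
  authentication:
    jwtSecret: ""                     # auto-generated when left empty
    oauthOptions:
      accessTokenMaxAge: 1h
      identityProviders:
      - name: LDAP
        type: LDAPIdentityProvider    # AD/LDAP; OAuth2 providers use other types
        mappingMethod: auto
        provider:
          host: ldap.example.com:389  # illustrative LDAP server
          managerDN: uid=admin,ou=system
          managerPassword: "********"
          userSearchBase: ou=Users,dc=example,dc=org
          loginAttribute: uid
```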

An app store for partners

The App Store and application lifecycle management are unique features of KubeSphere, implemented on top of OpenPitrix, which the team developed and open-sourced.

KubeSphere 3.2.0 adds dynamic loading of the App Store. Partners can apply to integrate an application’s Helm chart into the KubeSphere App Store; once the relevant pull request is merged, the App Store loads the app dynamically, no longer restricted by the KubeSphere release cycle. The built-in chart repository of the KubeSphere App Store is github.com/kubesphere/… For example, Nocalhost and Chaos Mesh have already integrated their Helm charts into KubeSphere 3.2.0 this way, making it easy for users to deploy them to Kubernetes with one click.
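
The submission itself is an ordinary Helm chart. A minimal Chart.yaml of the kind a partner might include, with all values illustrative:

```yaml
apiVersion: v2                   # Helm 3 chart API version
name: my-app                     # illustrative chart name
version: 0.1.0                   # chart version, bumped per release
appVersion: "1.0.0"              # version of the packaged application
description: An application published to the KubeSphere App Store.
icon: https://example.com/icon.svg
keywords:
  - demo
```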

KubeSphere DevOps is more independent

Since v3.2.0, KubeSphere DevOps has evolved into a standalone project, ks-devops, which end users are free to run in any Kubernetes cluster. Currently, the backend portion of ks-devops can be installed via a Helm chart.

Jenkins, a CI engine with a huge user base and a rich ecosystem, will now really “play” the role of an engine: stepping into the background while continuing to provide stable pipeline functionality. The new PipelineRun CRD encapsulates pipeline execution records, reducing the number of API calls that interact directly with Jenkins and improving the performance of the CI pipeline.
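
As a rough illustration of the idea only (the real schema lives in the kubesphere/ks-devops repository and may differ), each execution becomes its own custom resource instead of a round trip to the Jenkins API:

```yaml
# Hypothetical shape of a PipelineRun; consult kubesphere/ks-devops for the actual schema.
apiVersion: devops.kubesphere.io/v1alpha3   # assumed API group/version
kind: PipelineRun
metadata:
  generateName: sample-pipeline-run-        # one object per pipeline execution
  namespace: demo-devops-project
spec:
  pipelineRef:
    name: sample-pipeline                   # the Pipeline this run executes
# Parameters, the SCM revision, and run status (stages, steps, result) are
# recorded on the object, so the UI can read Kubernetes instead of Jenkins.
```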

Starting with v3.2.0, KubeSphere DevOps also supports building images in containerd-based pipelines. Going forward, KubeSphere DevOps will remain a standalone project, supporting independent deployment of its front end and back end, introducing GitOps tools such as Tekton and Argo CD, and integrating with project management and test management platforms.

Cluster deployment is more flexible

KubeSphere provides two deployment modes: KubeKey for users building new Kubernetes clusters, and ks-installer for existing Kubernetes clusters.

KubeKey is an efficient cluster deployment tool developed by the KubeSphere community. It uses Docker by default and can also connect to CRI runtimes such as containerd, CRI-O, and iSula. The etcd cluster runs independently and supports deployment separate from Kubernetes, improving the flexibility of environment deployment.

If you use KubeKey to deploy Kubernetes and KubeSphere, the following features are also noteworthy:

  • Supports the latest Kubernetes version, v1.22.1, and remains compatible with four recent versions; KubeKey also adds experimental support for deploying K3s.
  • Supports automatic renewal of Kubernetes cluster certificates.
  • Supports an internal load balancer high-availability deployment mode, reducing cluster deployment complexity (see the sketch after this list).
  • Most integrated components, such as Istio, Jaeger, Prometheus Operator, Fluent Bit, KubeEdge, and Nginx Ingress Controller, have been updated to newer upstream versions; see the Release Notes for 3.2.0 for details.
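
Most of the items above are driven from a single KubeKey cluster configuration file. A trimmed sketch, with hosts, passwords, and versions illustrative; generate the authoritative template with kk create config for your KubeKey version.

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1   # as generated by kk of this era
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.0.2, user: root, password: "***"}
  - {name: node2, address: 192.168.0.3, user: root, password: "***"}
  roleGroups:
    etcd: [node1]                 # etcd can run on dedicated nodes, apart from K8s
    master: [node1]
    worker: [node1, node2]
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
    internalLoadbalancer: haproxy # the internal LB HA mode mentioned above
  kubernetes:
    version: v1.22.1
    containerManager: docker      # or containerd / crio / isula
```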

User experience improvements

SIG Docs members have also redesigned and polished the Chinese and English text of the Console interface, making wording and terminology more professional and accurate. Hard-coded and concatenated UI strings were removed from the front end to better support localization and internationalization of the Console.

In addition, deeply engaged users in the KubeSphere community have enhanced several front-end features, such as new support for image search in Harbor registries, support for mounting storage volumes to init containers, and no longer automatically restarting workloads when a storage volume is expanded.

See the Release Notes for 3.2.0 for more user experience optimizations, feature enhancements, and bug fixes. KubeSphere 3.2.0 can be installed online via the two commands in the official documentation; an offline installation package will also be available from the community in about a week.

Thank you

Below are the GitHub IDs of the contributors to KubeSphere 3.2.0’s code and documentation. Please contact us if anyone has been omitted from this list.

About KubeSphere

KubeSphere (kubesphere.io) is an open source hybrid-cloud container platform built on top of Kubernetes, providing full-stack automated IT operations capabilities and simplifying enterprise DevOps workflows.

KubeSphere has been adopted by thousands of enterprises at home and abroad, including Aqara Smart Home, Ericsson, Benlai Life, Neusoft, Sina, Sany Group, Huaxia Bank, Sichuan Airlines, Zijin Insurance, Qunar, ZTO Express, China Pacific Insurance, China Mobile, China Telecom, Radore, and ZaloPay, among many others. KubeSphere offers a developer-friendly, wizard-style interface and rich enterprise-grade functionality, including Kubernetes multi-cloud and multi-cluster management, DevOps (CI/CD), application lifecycle management, edge computing, service mesh, multi-tenant management, observability, storage and network management, and GPU support, helping enterprises quickly build powerful and feature-rich container cloud platforms.

GitHub:github.com/kubesphere

Official website (China): kubesphere.com.cn

WeChat group: search for and add the group assistant (WeChat ID: kubesphere).
