OpenKruise, a cloud-native application automation suite and CNCF Sandbox project, has recently released v1.0.
OpenKruise [1] is a suite of enhanced capabilities for Kubernetes, focused on the deployment, upgrade, operations (O&M), and stability protection of cloud-native applications. All features are extended through standard mechanisms such as CRDs and can be used on any Kubernetes cluster of version 1.16 or above. Kruise can be installed with a single helm command, with no further configuration required.
In general, the capabilities provided by OpenKruise can be divided into several areas:
- Application workloads: advanced deployment and release strategies for stateless, stateful, and daemon workloads, such as in-place upgrade and streamed grayscale release.
- Sidecar container management: independent sidecar container definition, dynamic injection, independent in-place upgrade, and hot upgrade.
- Enhanced operations (O&M) capabilities: in-place container restart, image pre-pulling, and container startup ordering guarantees.
- Application partition management: manages the deployment ratio, order, and priority of applications across multiple partitions (availability zones, machine models, etc.).
- Application availability protection: helps applications on Kubernetes achieve higher security and availability.
Release highlights
In v1.0, OpenKruise brings a number of new features, as well as enhancements and optimizations to existing features.
First of all, starting from v1.0, OpenKruise has upgraded its CRD/WebhookConfiguration and other resource definitions from v1beta1 to v1, so it can support Kubernetes clusters of v1.22 and above. Accordingly, Kubernetes v1.16 or above is still required.
The following briefly introduces some of the v1.0 features. For the full changelog, please refer to the release notes on the OpenKruise GitHub and the official documentation.
Supports in-place upgrade of environment variables
Author: @FillZpp [2]
OpenKruise has supported “upgrade in place” functionality since earlier versions, primarily for CloneSet and Advanced StatefulSet workloads. Simply put, in-place upgrade enables an application to upgrade by modifying the container configuration of the Pod without deleting or creating new Pod objects.
As shown in the figure above, only certain fields of the Pod are changed during an in-place upgrade, so:
- Extra operations and costs such as scheduling, IP allocation, and volume mounting are avoided.
- Image pulls are faster, because the node reuses most layers of the old image and only needs to pull the few layers that changed in the new image.
- When one container is upgraded in place, the Pod's network, mounted volumes, and the other containers in the Pod are unaffected and keep running.
However, in the past OpenKruise could only apply in-place upgrade to changes of the image field in a Pod; changes to any other field still fell back to a recreate upgrade similar to Deployment's. We have received a lot of feedback from users asking for in-place upgrade of more fields, such as env, which is difficult to do because of limitations in kube-apiserver.
After persistent effort, OpenKruise v1.0 finally supports in-place upgrade of env environment variables via the Downward API. For example, in the CloneSet YAML below, the user stores the configuration in an annotation and references it from the corresponding env entry. When the configuration later changes, the user only needs to update the annotation value; Kruise will then restart, in place, every container in the Pod whose env references that annotation, so the new value takes effect.
```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: CloneSet
metadata:
  ...
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        app-config: "... the real env value ..."
    spec:
      containers:
      - name: app
        env:
        - name: APP_CONFIG
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['app-config']
  updateStrategy:
    type: InPlaceIfPossible
```
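Rolling out a new env value is then just a matter of patching the annotation; a minimal sketch, assuming the CloneSet above is named sample (a hypothetical name):

```
# Patch only the template annotation; Kruise detects that the app
# container's env references it and restarts that container in place,
# without recreating the Pod.
kubectl patch cloneset sample --type=merge -p '
spec:
  template:
    metadata:
      annotations:
        app-config: "... the new env value ..."'
```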
Also in this release, we removed the old imageID restriction on in-place image upgrades: it is now possible to upgrade in place between two different images that share the same imageID.
Please refer to the document [3] for specific usage.
Configure distribution across namespaces
Author: @veophi [4]
For namespace-scoped resources such as Secret and ConfigMap that need to be distributed and synchronized across namespaces, native Kubernetes only lets users do the distribution and synchronization manually, one namespace at a time, which is very inconvenient.
Typical cases are:
- To use SidecarSet's imagePullSecrets capability, users need to repeatedly create the Secret in each corresponding namespace and ensure that these Secrets stay correct and consistent.
- To configure some common environment variables with a ConfigMap, users need to distribute that ConfigMap to multiple namespaces.
Therefore, facing these scenarios that require resources to be distributed across namespaces and kept synchronized in multiple namespaces, we wanted a more convenient tool to automate the work, so we designed and implemented a new CRD: ResourceDistribution.
ResourceDistribution currently supports distribution and synchronization of Secret and ConfigMap resources.
```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: ResourceDistribution
metadata:
  name: sample
spec:
  resource:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: game-demo
    data:
      ...
  targets:
    namespaceLabelSelector:
      ...
    # or includedNamespaces, excludedNamespaces
```
As the YAML above shows, ResourceDistribution is a cluster-scoped CRD consisting of two main fields: resource and targets. The resource field describes the resource to be distributed, and the targets field describes the target namespaces to distribute it to.
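For illustration, targets supports a few selection modes that can be combined; the sketch below is hypothetical (the namespace names and labels are made up) and is based on the fields shown in the sample above:

```yaml
targets:
  # select every namespace carrying this label
  namespaceLabelSelector:
    matchLabels:
      group: test
  # plus an explicit list of namespaces to include
  includedNamespaces:
    list:
    - name: ns-1
  # and namespaces to always exclude
  excludedNamespaces:
    list:
    - name: ns-2
```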
Please refer to the document [5] for specific usage.
Container start sequence control
Author: @Concurrensee [6]
For a Pod in Kubernetes, there may be dependencies between containers, for example the application process in container B depends on the application in container A. This creates a need for ordering between multiple containers:
- Container A starts first, and container B starts only after A has started successfully.
- Container B exits first, and container A stops only after B has fully exited.
In general, the startup and exit order of Pod containers is managed by the kubelet. Kubernetes once had a KEP planning to add a type field on containers to identify the start/stop priority of different kinds of containers, but that KEP has been rejected for now because SIG Node considered it too large a change to the existing code architecture.
Therefore, OpenKruise v1.0 provides a feature called Container Launch Priority to control the forced startup order of multiple containers in a Pod:
- For any Pod object, define the annotation apps.kruise.io/container-launch-priority: Ordered, and Kruise will guarantee that containers start serially in the order of the Pod's containers list.
- To customize the startup order of the containers, add a KRUISE_CONTAINER_PRIORITY environment variable to a container's env, with an integer value in the range [-2147483647, 2147483647]. The higher a container's priority value, the earlier its startup is guaranteed.
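For example, a hypothetical Pod in which a sidecar must start before the main container could set the priority env as follows (the container and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: sidecar
    image: sidecar-image
    env:
    - name: KRUISE_CONTAINER_PRIORITY
      value: "1"   # higher value, guaranteed to start earlier
  - name: main
    image: main-image
    # no priority set, so it starts after the sidecar
```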
Please refer to the document [7] for specific usage.
Kubectl-kruise command line tool
Author: @hantmac [8]
In the past, OpenKruise has provided its Kruise API definitions and client wrappers for Go, Java, and other languages through repositories such as kruise-api and client-java, which users can import into their own applications. Still, many users also need to operate these workload resources flexibly from the command line in test environments.
However, the rollout, set image, and other commands provided by the native kubectl tool only work with native workload types such as Deployment and StatefulSet, and cannot recognize the extended workload types in OpenKruise.
Therefore, OpenKruise now provides the kubectl-kruise command line tool, a standard kubectl plugin that offers many functions adapted to OpenKruise workloads.
```bash
# rollout undo cloneset
$ kubectl kruise rollout undo cloneset/nginx

# rollout status advanced statefulset
$ kubectl kruise rollout status statefulsets.apps.kruise.io/sts-demo

# set image of a cloneset
$ kubectl kruise set image cloneset/nginx busybox=busybox nginx=nginx:1.9.1
```
Please refer to the document [9] for specific usage.
Other feature improvements and optimizations
CloneSet:
- Supports streamed (rate-limited) scale-up via the scaleStrategy.maxUnavailable policy
- Changed the stable revision determination logic: when all Pod versions are consistent with updateRevision, it is marked as currentRevision
WorkloadSpread:
- Supports taking over existing Pods into matched subsets
- Optimized webhook update and retry logic during Pod injection
Advanced DaemonSet:
- Support for in-place updates to Daemon Pods
- Introduced a progressive annotation to choose whether to limit Pod creation by partition
SidecarSet:
- SidecarSet now filters out and ignores inactive Pods
- Added the SourceContainerNameFrom and EnvNames fields to transferEnv, to reduce redundancy when container names are inconsistent and there are many envs
PodUnavailableBudget:
- Added a protection bypass flag
- The PodUnavailableBudget controller now watches replicas changes of the workload
NodeImage:
- Added the --nodeimage-creation-delay flag to wait until a newly added Node is ready before creating its NodeImage
UnitedDeployment:
- Fixed the issue that a Pod's NodeSelectorTerms had length 0 when NodeSelectorTerms was nil
Other optimization:
- kruise-daemon now operates on Pod resources using the protobuf protocol
- Exposed the cache resync interval as a command line parameter, with a default of 0 in the chart
- Fixed the http checker refresh on certs updates
- Replaced the forked controller-tools with native controller-tools plus markers
Community participation
You are welcome to join the OpenKruise community via GitHub, Slack, DingTalk, or WeChat. Have something to discuss with our community? Share your voice at our community bi-weekly meeting [10], or join the discussion through the following channels:
- Join the community Slack channel [11]
- Join the community DingTalk group: search for group number 23330762 (Chinese)
- Join the community WeChat group: add the user openkruise and let the robot invite you into the group (Chinese)
References
[1] OpenKruise:
https://openkruise.io
[2] @FillZpp:
https://github.com/FillZpp
[3] Documentation:
/docs/core-concepts/inplace-update
[4] @veophi:
https://github.com/veophi
[5] Documentation:
/docs/user-manuals/resourcedistribution
[6] @Concurrensee:
https://github.com/Concurrensee
[7] Documentation:
/docs/user-manuals/containerlaunchpriority
[8] @hantmac:
https://github.com/hantmac
[9] Documentation:
/docs/cli-tool/kubectl-plugin
[10] Community Biweekly Meeting:
https://shimo.im/docs/gXqmeQOYBehZ4vqo
[11] Slack channel:
https://kubernetes.slack.com/channels/openkruise
Check out the official homepage and documentation of the OpenKruise project!