Chaos Mesh® is a Chaos Engineering system that runs on Kubernetes, developed by PingCAP, the company behind TiDB. In short, Chaos Mesh® creates chaos (simulated failures) in a cluster by running privileged containers in the K8s cluster according to test scenarios defined in CRD resources [1].
This article explores the practice of Chaos engineering on a Kubernetes cluster, explains how Chaos Mesh® works based on source code analysis, and illustrates how to develop a control plane for Chaos Mesh® with code examples. If you are missing the basics, see the links in the notes at the end of this article for a general understanding of the Chaos Mesh® architecture.
The test code for this article is located in the mayocream/chaos-mesh-controlpanel-demo repository.
How to Create Chaos
Chaos Mesh® is a great tool for Chaos engineering on Kubernetes. How does it work?
Privileged mode
As mentioned above, Chaos Mesh® runs Kubernetes privileged containers to create failures. The Pods, running as a DaemonSet, grant the containers additional Capabilities at run time.
apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    metadata:
      ...
    spec:
      containers:
        - name: chaos-daemon
          securityContext:
            {{- if .Values.chaosDaemon.privileged }}
            privileged: true
            capabilities:
              add:
                - SYS_PTRACE
            {{- else }}
            capabilities:
              add:
                - SYS_PTRACE
                - NET_ADMIN
                - MKNOD
                - SYS_CHROOT
                - SYS_ADMIN
                - KILL
                # CAP_IPC_LOCK is used to lock memory
                - IPC_LOCK
            {{- end }}
These Linux capabilities are used to grant the container the privileges to create and access the /dev/fuse FUSE pipe [2] (FUSE is the Linux user-space file system interface, which enables unprivileged users to create their own file systems without editing kernel code). As #1109 Pull Request shows, the DaemonSet uses CGO to call the Linux makedev function to create the FUSE pipe.
// #include <sys/sysmacros.h>
// #include <sys/types.h>
//
// // makedev is a macro, so a wrapper is needed
// dev_t Makedev(unsigned int maj, unsigned int min) {
//   return makedev(maj, min);
// }

// EnsureFuseDev ensures /dev/fuse exists. If not, it will create one
func EnsureFuseDev() {
    if _, err := os.Open("/dev/fuse"); os.IsNotExist(err) {
        // 10, 229 according to https://www.kernel.org/doc/Documentation/admin-guide/devices.txt
        fuse := C.Makedev(10, 229)
        syscall.Mknod("/dev/fuse", 0o666|syscall.S_IFCHR, int(fuse))
    }
}
Also in #1103 PR, the Chaos Daemon enables privileged mode by default, setting privileged: true in the container's securityContext.
Kill the Pod
PodKill, PodFailure, and ContainerKill all fall under the PodChaos category. PodKill kills a randomly selected Pod; the implementation actually just calls the API Server to delete the Pod.
import ( "context" v1 "k8s.io/api/core/v1" "sigs.k8s.io/controller-runtime/pkg/client")type Impl struct { client.Client}func (impl *Impl) Apply(ctx context.Context, index int, records []*v1alpha1.Record, obj v1alpha1.InnerObject) (v1alpha1.Phase, error) { ... err = impl.Get(ctx, namespacedName, &pod) if err ! = nil { // TODO: handle this error return v1alpha1.NotInjected, err } err = impl.Delete(ctx, &pod, &client.DeleteOptions{ GracePeriodSeconds: &podchaos.Spec.GracePeriod, // PeriodSeconds has to be set specifically }) ... return v1alpha1.Injected, nil}Copy the code
The GracePeriodSeconds parameter corresponds to K8s forced Pod termination; for example, we use kubectl delete pod --grace-period=0 --force to delete a Pod quickly. PodFailure replaces the image in a Pod with an incorrect image by patching the Pod object. Chaos Mesh only modifies the image fields of containers and initContainers, because most other Pod fields cannot be changed. For details, see Pod update and replacement.
func (impl *Impl) Apply(ctx context.Context, index int, records []*v1alpha1.Record, obj v1alpha1.InnerObject) (v1alpha1.Phase, error) {
    ...
    pod := origin.DeepCopy()
    for index := range pod.Spec.Containers {
        originImage := pod.Spec.Containers[index].Image
        name := pod.Spec.Containers[index].Name
        key := annotation.GenKeyForImage(podchaos, name, false)
        if pod.Annotations == nil {
            pod.Annotations = make(map[string]string)
        }
        // If the annotation is already existed, we could skip the reconcile for this container
        if _, ok := pod.Annotations[key]; ok {
            continue
        }
        pod.Annotations[key] = originImage
        pod.Spec.Containers[index].Image = config.ControllerCfg.PodFailurePauseImage
    }

    for index := range pod.Spec.InitContainers {
        originImage := pod.Spec.InitContainers[index].Image
        name := pod.Spec.InitContainers[index].Name
        key := annotation.GenKeyForImage(podchaos, name, true)
        if pod.Annotations == nil {
            pod.Annotations = make(map[string]string)
        }
        // If the annotation is already existed, we could skip the reconcile for this container
        if _, ok := pod.Annotations[key]; ok {
            continue
        }
        pod.Annotations[key] = originImage
        pod.Spec.InitContainers[index].Image = config.ControllerCfg.PodFailurePauseImage
    }

    err = impl.Patch(ctx, pod, client.MergeFrom(&origin))
    if err != nil {
        // TODO: handle this error
        return v1alpha1.NotInjected, err
    }

    return v1alpha1.Injected, nil
}
The default container image used to cause the failure is gcr.io/google-containers/pause:latest, which can be replaced with a registry.aliyuncs.com mirror when used in mainland China, where gcr.io is very likely unreachable. ContainerKill differs from PodKill and PodFailure: while those two control the Pod life cycle through the K8s API Server, ContainerKill is carried out by the Chaos Daemon running on the cluster node. Specifically, the Chaos Controller Manager runs a gRPC client and makes gRPC calls to the Chaos Daemon.
func (b *ChaosDaemonClientBuilder) Build(ctx context.Context, pod *v1.Pod) (chaosdaemonclient.ChaosDaemonClientInterface, error) {
    ...
    daemonIP, err := b.FindDaemonIP(ctx, pod)
    if err != nil {
        return nil, err
    }
    builder := grpcUtils.Builder(daemonIP, config.ControllerCfg.ChaosDaemonPort).WithDefaultTimeout()
    if config.ControllerCfg.TLSConfig.ChaosMeshCACert != "" {
        builder.TLSFromFile(config.ControllerCfg.TLSConfig.ChaosMeshCACert, config.ControllerCfg.TLSConfig.ChaosDaemonClientCert, config.ControllerCfg.TLSConfig.ChaosDaemonClientKey)
    } else {
        builder.Insecure()
    }
    cc, err := builder.Build()
    if err != nil {
        return nil, err
    }

    return chaosdaemonclient.New(cc), nil
}
When sending a command to a Chaos Daemon, the client is created based on Pod information; for example, to control a Pod on a given Node, the client is built with the address of the Chaos Daemon on the Node where that Pod resides. If a TLS certificate is configured, the Controller Manager adds the TLS certificate to the client. The Chaos Daemon enables gRPCS if a TLS certificate is present at startup. The TLS configuration RequireAndVerifyClientCert indicates that mutual TLS authentication (mTLS) is enabled.
func newGRPCServer(containerRuntime string, reg prometheus.Registerer, tlsConf tlsConfig) (*grpc.Server, error) {
    ...
    if tlsConf != (tlsConfig{}) {
        caCert, err := ioutil.ReadFile(tlsConf.CaCert)
        if err != nil {
            return nil, err
        }
        caCertPool := x509.NewCertPool()
        caCertPool.AppendCertsFromPEM(caCert)

        serverCert, err := tls.LoadX509KeyPair(tlsConf.Cert, tlsConf.Key)
        if err != nil {
            return nil, err
        }

        creds := credentials.NewTLS(&tls.Config{
            Certificates: []tls.Certificate{serverCert},
            ClientCAs:    caCertPool,
            ClientAuth:   tls.RequireAndVerifyClientCert,
        })

        grpcOpts = append(grpcOpts, grpc.Creds(creds))
    }

    s := grpc.NewServer(grpcOpts...)
    grpcMetrics.InitializeMetrics(s)

    pb.RegisterChaosDaemonServer(s, ds)
    reflection.Register(s)

    return s, nil
}
The Chaos Daemon provides the following gRPC call interface:
// ChaosDaemonClient is the client API for ChaosDaemon service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type ChaosDaemonClient interface {
    SetTcs(ctx context.Context, in *TcsRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    FlushIPSets(ctx context.Context, in *IPSetsRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    SetIptablesChains(ctx context.Context, in *IptablesChainsRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    SetTimeOffset(ctx context.Context, in *TimeRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    RecoverTimeOffset(ctx context.Context, in *TimeRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    ContainerKill(ctx context.Context, in *ContainerRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    ContainerGetPid(ctx context.Context, in *ContainerRequest, opts ...grpc.CallOption) (*ContainerResponse, error)
    ExecStressors(ctx context.Context, in *ExecStressRequest, opts ...grpc.CallOption) (*ExecStressResponse, error)
    CancelStressors(ctx context.Context, in *CancelStressRequest, opts ...grpc.CallOption) (*empty.Empty, error)
    ApplyIOChaos(ctx context.Context, in *ApplyIOChaosRequest, opts ...grpc.CallOption) (*ApplyIOChaosResponse, error)
    ApplyHttpChaos(ctx context.Context, in *ApplyHttpChaosRequest, opts ...grpc.CallOption) (*ApplyHttpChaosResponse, error)
    SetDNSServer(ctx context.Context, in *SetDNSServerRequest, opts ...grpc.CallOption) (*empty.Empty, error)
}
Network fault
Going back to the earliest #41 PR, it is clear that Chaos Mesh® injects network faults by calling the pbClient.SetNetem method, which packs the parameters into a request and hands it to the Chaos Daemon on the Node. (Note: this is code from early 2019; as the project has evolved, these functions have been split across different files.)
func (r *Reconciler) applyPod(ctx context.Context, pod *v1.Pod, networkchaos *v1alpha1.NetworkChaos) error {
    ...
    pbClient := pb.NewChaosDaemonClient(c)
    containerId := pod.Status.ContainerStatuses[0].ContainerID

    netem, err := spec.ToNetem()
    if err != nil {
        return err
    }

    _, err = pbClient.SetNetem(ctx, &pb.NetemRequest{
        ContainerId: containerId,
        Netem:       netem,
    })

    return err
}
Also in the pkg/chaosdaemon package, we can see how the Chaos Daemon handles the request.
func (s *Server) SetNetem(ctx context.Context, in *pb.NetemRequest) (*empty.Empty, error) {
    log.Info("Set netem", "Request", in)

    pid, err := s.crClient.GetPidFromContainerID(ctx, in.ContainerId)
    if err != nil {
        return nil, status.Errorf(codes.Internal, "get pid from containerID error: %v", err)
    }

    if err := Apply(in.Netem, pid); err != nil {
        return nil, status.Errorf(codes.Internal, "netem apply error: %v", err)
    }

    return &empty.Empty{}, nil
}

// Apply applies a netem on eth0 in pid related namespace
func Apply(netem *pb.Netem, pid uint32) error {
    log.Info("Apply netem on PID", "pid", pid)

    ns, err := netns.GetFromPath(GenNetnsPath(pid))
    if err != nil {
        log.Error(err, "failed to find network namespace", "pid", pid)
        return errors.Trace(err)
    }
    defer ns.Close()

    handle, err := netlink.NewHandleAt(ns)
    if err != nil {
        log.Error(err, "failed to get handle at network namespace", "network namespace", ns)
        return err
    }

    link, err := handle.LinkByName("eth0") // TODO: check whether interface name is eth0
    if err != nil {
        log.Error(err, "failed to find eth0 interface")
        return errors.Trace(err)
    }

    netemQdisc := netlink.NewNetem(netlink.QdiscAttrs{
        LinkIndex: link.Attrs().Index,
        Handle:    netlink.MakeHandle(1, 0),
        Parent:    netlink.HANDLE_ROOT,
    }, ToNetlinkNetemAttrs(netem))

    if err = handle.QdiscAdd(netemQdisc); err != nil {
        if !strings.Contains(err.Error(), "file exists") {
            log.Error(err, "failed to add Qdisc")
            return errors.Trace(err)
        }
    }

    return nil
}
Finally, the vishvananda/netlink library is used to manipulate the Linux network interface to get the job done. NetworkChaos manipulates the Linux host network to create chaos, relying on tools such as iptables and ipset. The Chaos Daemon's Dockerfile shows the Linux toolchain it depends on:
RUN apt-get update && \ apt-get install -y tzdata iptables ipset stress-ng iproute2 fuse util-linux procps curl && \ rm -rf /var/lib/apt/lists/*
Stress testing
StressChaos is also implemented by the Chaos Daemon: the Controller Manager calculates the StressChaos rules and then delivers the task to the specific Daemon. The assembled parameters are shown below; they are combined into command-line arguments, appended to the stress-ng command, and executed [3].
// Normalize the stressors to comply with stress-ng
func (in *Stressors) Normalize() (string, error) {
    stressors := ""
    if in.MemoryStressor != nil && in.MemoryStressor.Workers != 0 {
        stressors += fmt.Sprintf(" --vm %d --vm-keep", in.MemoryStressor.Workers)
        if len(in.MemoryStressor.Size) != 0 {
            if in.MemoryStressor.Size[len(in.MemoryStressor.Size)-1] != '%' {
                size, err := units.FromHumanSize(string(in.MemoryStressor.Size))
                if err != nil {
                    return "", err
                }
                stressors += fmt.Sprintf(" --vm-bytes %d", size)
            } else {
                stressors += fmt.Sprintf(" --vm-bytes %s", in.MemoryStressor.Size)
            }
        }

        if in.MemoryStressor.Options != nil {
            for _, v := range in.MemoryStressor.Options {
                stressors += fmt.Sprintf(" %v ", v)
            }
        }
    }

    if in.CPUStressor != nil && in.CPUStressor.Workers != 0 {
        stressors += fmt.Sprintf(" --cpu %d", in.CPUStressor.Workers)
        if in.CPUStressor.Load != nil {
            stressors += fmt.Sprintf(" --cpu-load %d", *in.CPUStressor.Load)
        }

        if in.CPUStressor.Options != nil {
            for _, v := range in.CPUStressor.Options {
                stressors += fmt.Sprintf(" %v ", v)
            }
        }
    }

    return stressors, nil
}
The Chaos Daemon's server handler calls the official Go os/exec package to run the command. The implementation lives in the stress_server_linux.go file; there are also files with the same name ending in darwin, presumably for ease of development and debugging on macOS. The code uses the shirou/gopsutil package to get the PID process status and reads stdout, stderr, and the other standard streams. I have seen this handling pattern in hashicorp/go-plugin, where go-plugin does it better; I mentioned it in my other article, Dkron source code analysis [4].
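To make that concrete, here is a minimal sketch (my own illustration, not the chaos-daemon code) of the pattern described above: split the normalized flag string, launch stress-ng with os/exec, and inspect the resulting process with shirou/gopsutil.

package main

import (
    "fmt"
    "os/exec"
    "strings"

    "github.com/shirou/gopsutil/process"
)

// runStressors starts stress-ng with the flags produced by Normalize and
// looks up the child process with gopsutil. Illustrative only; the real
// daemon also tracks the PID state so the stressors can be cancelled later.
func runStressors(stressors string) error {
    cmd := exec.Command("stress-ng", strings.Fields(stressors)...)
    if err := cmd.Start(); err != nil {
        return err
    }

    // Query the child process, the same way gopsutil is used above.
    proc, err := process.NewProcess(int32(cmd.Process.Pid))
    if err != nil {
        return err
    }
    status, _ := proc.Status()
    fmt.Printf("stress-ng pid=%d status=%s\n", cmd.Process.Pid, status)

    return cmd.Wait()
}

func main() {
    if err := runStressors("--cpu 1 --cpu-load 50 --timeout 10s"); err != nil {
        fmt.Println("run stressors:", err)
    }
}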
IO injection
Chaos Mesh® uses privileged containers to mount the FUSE device /dev/fuse on the host. My first guess was that Chaos Mesh® would use a mutating admission controller to inject a sidecar container, mount the FUSE device, or modify the Pod's Volume Mount configuration, but it does not work the way intuition suggests. A closer look at #826 PR, which introduced the new IOChaos implementation, shows that it avoids sidecar injection: the Chaos Daemon operates directly on the Linux namespaces through runc's low-level container commands and runs chaos-mesh/toda, a FUSE program developed in Rust (communicating over JSON-RPC 2.0), to inject IO chaos into the container. Note that the new IOChaos implementation does not modify the Pod resource. When an IOChaos experiment is created, a PodIoChaos resource is created for each Pod matched by the selector field, with the Pod as the Owner Reference of PodIoChaos. A set of Finalizers is also added so that the PodIoChaos resources can be released before removal.
// Apply implements the reconciler.InnerReconciler.Apply
func (r *Reconciler) Apply(ctx context.Context, req ctrl.Request, chaos v1alpha1.InnerObject) error {
    iochaos, ok := chaos.(*v1alpha1.IoChaos)
    if !ok {
        err := errors.New("chaos is not IoChaos")
        r.Log.Error(err, "chaos is not IoChaos", "chaos", chaos)
        return err
    }

    source := iochaos.Namespace + "/" + iochaos.Name
    m := podiochaosmanager.New(source, r.Log, r.Client)

    pods, err := utils.SelectAndFilterPods(ctx, r.Client, r.Reader, &iochaos.Spec)
    if err != nil {
        r.Log.Error(err, "failed to select and filter pods")
        return err
    }

    r.Log.Info("applying iochaos", "iochaos", iochaos)

    for _, pod := range pods {
        t := m.WithInit(types.NamespacedName{
            Name:      pod.Name,
            Namespace: pod.Namespace,
        })

        // TODO: support chaos on multiple volume
        t.SetVolumePath(iochaos.Spec.VolumePath)
        t.Append(v1alpha1.IoChaosAction{
            Type: iochaos.Spec.Action,
            Filter: v1alpha1.Filter{
                Path:    iochaos.Spec.Path,
                Percent: iochaos.Spec.Percent,
                Methods: iochaos.Spec.Methods,
            },
            Faults: []v1alpha1.IoFault{
                {
                    Errno:  iochaos.Spec.Errno,
                    Weight: 1,
                },
            },
            Latency:          iochaos.Spec.Delay,
            AttrOverrideSpec: iochaos.Spec.Attr,
            Source:           m.Source,
        })

        key, err := cache.MetaNamespaceKeyFunc(&pod)
        if err != nil {
            return err
        }
        iochaos.Finalizers = utils.InsertFinalizer(iochaos.Finalizers, key)
    }

    r.Log.Info("commiting updates of podiochaos")
    err = m.Commit(ctx)
    if err != nil {
        r.Log.Error(err, "fail to commit")
        return err
    }

    r.Event(iochaos, v1.EventTypeNormal, utils.EventChaosInjected, "")

    return nil
}
In the Controller of a PodIoChaos resource, the Controller Manager encapsulates the resource as a parameter and calls the Chaos Daemon interface for actual processing.
// Apply flushes io configuration on pod
func (h *Handler) Apply(ctx context.Context, chaos *v1alpha1.PodIoChaos) error {
    h.Log.Info("updating io chaos", "pod", chaos.Namespace+"/"+chaos.Name, "spec", chaos.Spec)

    ...

    res, err := pbClient.ApplyIoChaos(ctx, &pb.ApplyIoChaosRequest{
        Actions:     input,
        Volume:      chaos.Spec.VolumeMountPath,
        ContainerId: containerID,

        Instance:  chaos.Spec.Pid,
        StartTime: chaos.Spec.StartTime,
    })
    if err != nil {
        return err
    }

    chaos.Spec.Pid = res.Instance
    chaos.Spec.StartTime = res.StartTime
    chaos.OwnerReferences = []metav1.OwnerReference{
        {
            APIVersion: pod.APIVersion,
            Kind:       pod.Kind,
            Name:       pod.Name,
            UID:        pod.UID,
        },
    }

    return nil
}
In the Chaos Daemon code file pkg/chaosdaemon/iochaos_server.go, the container needs to be injected with the FUSE program. As described in #2305 Issue, a command such as /usr/local/bin/nsexec -l -p /proc/119186/ns/pid -m /proc/119186/ns/mnt -- /usr/local/bin/toda --path tmp --verbose info is executed to run the toda program inside specific Linux namespaces, namely the same namespaces as the target Pod.
func (s *DaemonServer) ApplyIOChaos(ctx context.Context, in *pb.ApplyIOChaosRequest) (*pb.ApplyIOChaosResponse, error) {
    ...
    pid, err := s.crClient.GetPidFromContainerID(ctx, in.ContainerId)
    if err != nil {
        log.Error(err, "error while getting PID")
        return nil, err
    }

    args := fmt.Sprintf("--path %s --verbose info", in.Volume)
    log.Info("executing", "cmd", todaBin+" "+args)

    processBuilder := bpm.DefaultProcessBuilder(todaBin, strings.Split(args, " ")...).
        EnableLocalMnt().
        SetIdentifier(in.ContainerId)

    if in.EnterNS {
        processBuilder = processBuilder.SetNS(pid, bpm.MountNS).SetNS(pid, bpm.PidNS)
    }

    ...

    // JSON-RPC call
    client, err := jrpc.DialIO(ctx, receiver, caller)
    if err != nil {
        return nil, err
    }

    cmd := processBuilder.Build()
    procState, err := s.backgroundProcessManager.StartProcess(cmd)
    if err != nil {
        return nil, err
    }
    ...
}
The following code finally builds the command to run; this is runc's low-level namespace isolation in action [5]:
// GetNsPath returns corresponding namespace path
func GetNsPath(pid uint32, typ NsType) string {
    return fmt.Sprintf("%s/%d/ns/%s", DefaultProcPrefix, pid, string(typ))
}

// SetNS sets the namespace of the process
func (b *ProcessBuilder) SetNS(pid uint32, typ NsType) *ProcessBuilder {
    return b.SetNSOpt([]nsOption{{
        Typ:  typ,
        Path: GetNsPath(pid, typ),
    }})
}

// Build builds the process
func (b *ProcessBuilder) Build() *ManagedProcess {
    args := b.args
    cmd := b.cmd

    if len(b.nsOptions) > 0 {
        args = append([]string{"--", cmd}, args...)
        for _, option := range b.nsOptions {
            args = append([]string{"-" + nsArgMap[option.Typ], option.Path}, args...)
        }

        if b.localMnt {
            args = append([]string{"-l"}, args...)
        }
        cmd = nsexecPath
    }
    ...
}
Control plane
Chaos Mesh® is an open source Chaos engineering system under the Apache 2.0 license. As the analysis above shows, it has rich capabilities and a good ecosystem. The maintenance team has developed the user-space file system (FUSE) chaos-mesh/toda, the CoreDNS chaos plug-in chaos-mesh/k8s_dns_chaos, the kernel fault injection tool chaos-mesh/bpfki, and more. Here is what the server-side code might look like if I wanted to build a Chaos engineering platform for end users. The example is only an exercise and does not represent best practice. If you want to see real-world platform development, refer to the official Chaos Mesh® Dashboard, which internally uses the uber-go/fx dependency injection framework and the controller-runtime manager mode.
Features
All we need to do is implement a server that delivers YAML to the Kubernetes API. The Chaos Controller Manager performs the complex rule validation and delivers the rules to the Chaos Daemons. If you want to integrate with your own platform, all you need to do is hook into the CRD resource creation process. Let's take a look at PingCAP's official example:
import ( "context" "github.com/pingcap/chaos-mesh/api/v1alpha1" "sigs.k8s.io/controller-runtime/pkg/client")func main() { ... delay := &chaosv1alpha1.NetworkChaos{ Spec: chaosv1alpha1.NetworkChaosSpec{...}, } k8sClient := client.New(conf, client.Options{ Scheme: scheme.Scheme }) k8sClient.Create(context.TODO(), delay) k8sClient.Delete(context.TODO(), delay)}
Chaos Mesh® already provides the CRD resource definitions for all of its APIs; we use controller-runtime, developed under Kubernetes SIG API Machinery, to simplify interaction with the Kubernetes API.
Implementing chaos
For example, if we want to create a PodKill resource, it is sent to the Kubernetes API Server and validated by the Chaos Controller Manager's validating admission controller; if the data format is invalid, an error is returned at creation time. For details, see the official documentation on creating experiments using a YAML configuration file. NewClient creates a K8s API client; see the client creation example.
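A minimal version of that helper might look like the following sketch (my own illustration, assuming the demo repository's controlpanel package layout): it registers the Chaos Mesh CRD types into the scheme and builds a controller-runtime client.

package controlpanel

import (
    "github.com/chaos-mesh/chaos-mesh/api/v1alpha1"
    "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// NewClient registers the Chaos Mesh types into the client-go scheme and
// builds a controller-runtime client. ctrl.GetConfig() resolves kubeconfig
// from flags, environment variables, or the in-cluster Service Account,
// as discussed later in this article.
func NewClient() (client.Client, error) {
    if err := v1alpha1.AddToScheme(scheme.Scheme); err != nil {
        return nil, err
    }
    conf, err := ctrl.GetConfig()
    if err != nil {
        return nil, err
    }
    return client.New(conf, client.Options{Scheme: scheme.Scheme})
}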
package mainimport ( "context" "controlpanel" "log" "github.com/chaos-mesh/chaos-mesh/api/v1alpha1" "github.com/pkg/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1")func applyPodKill(name, namespace string, labels map[string]string) error { cli, err := controlpanel.NewClient() if err ! = nil { return errors.Wrap(err, "create client") } cr := &v1alpha1.PodChaos{ ObjectMeta: metav1.ObjectMeta{ GenerateName: name, Namespace: namespace, }, Spec: v1alpha1.PodChaosSpec{ Action: v1alpha1.PodKillAction, ContainerSelector: v1alpha1.ContainerSelector{ PodSelector: v1alpha1.PodSelector{ Mode: v1alpha1.OnePodMode, Selector: v1alpha1.PodSelectorSpec{ Namespaces: []string{namespace}, LabelSelectors: labels, }, }, }, }, } if err := cli.Create(context.Background(), cr); err ! = nil { return errors.Wrap(err, "create podkill") } return nil}Copy the code
The log output of the running program is:
I1021 00:51:55.225502   23781 request.go:665] Waited for 1.033116256s due to client-side throttling, not priority and fairness, request: GET:https://***
2021/10/21 00:51:56 apply podkill
Check PodKill resource status with kubectl:
$ k describe podchaos.chaos-mesh.org -n dev podkillvjn77
Name:         podkillvjn77
Namespace:    dev
Labels:       <none>
Annotations:  <none>
API Version:  chaos-mesh.org/v1alpha1
Kind:         PodChaos
Metadata:
  Creation Timestamp:  2021-10-20T16:51:56Z
  Finalizers:
    chaos-mesh/records
  Generate Name:     podkill
  Generation:        7
  Resource Version:  938921488
  Self Link:         /apis/chaos-mesh.org/v1alpha1/namespaces/dev/podchaos/podkillvjn77
  UID:               afbb40b3-ade8-48ba-89db-04918d89fd0b
Spec:
  Action:        pod-kill
  Grace Period:  0
  Mode:          one
  Selector:
    Label Selectors:
      app:  nginx
    Namespaces:
      dev
Status:
  Conditions:
    Reason:
    Status:  False
    Type:    Paused
    Reason:
    Status:  True
    Type:    Selected
    Reason:
    Status:  True
    Type:    AllInjected
    Reason:
    Status:  False
    Type:    AllRecovered
  Experiment:
    Container Records:
      Id:            dev/nginx
      Phase:         Injected
      Selector Key:  .
    Desired Phase:   Run
Events:
  Type    Reason           Age    From          Message
  ----    ------           ----   ----          -------
  Normal  FinalizerInited  6m35s  finalizer     Finalizer has been inited
  Normal  Updated          6m35s  finalizer     Successfully update finalizer of resource
  Normal  Updated          6m35s  records       Successfully update records of resource
  Normal  Updated          6m35s  desiredphase  Successfully update desiredPhase of resource
  Normal  Applied          6m35s  records       Successfully apply chaos for dev/nginx
  Normal  Updated          6m35s  records       Successfully update records of resource
The control plane naturally also needs to query and list Chaos resources, so that platform users can view the status of all Chaos experiments and manage them. This is essentially just sending Get/List requests to the REST API, but in practice there are details worth attention: our company has had a Controller request the full set of resource data on every reconcile, which drove up the load on the K8s API Server. I recommend the Japanese Kubebuilder tutorial linked in the notes at the end of this article. For example, controller-runtime by default reads kubeconfig from multiple locations: flags, environment variables, and the Service Account automatically mounted in the Pod; the Armosec/Kubescape #21 PR also makes use of this feature. That tutorial also covers common operations such as how to paginate, update, and overwrite objects; I have not seen any Chinese or English tutorial with this much detail. Examples of Get/List requests:
package controlpanel

import (
    "context"

    "github.com/chaos-mesh/chaos-mesh/api/v1alpha1"
    "github.com/pkg/errors"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

func GetPodChaos(name, namespace string) (*v1alpha1.PodChaos, error) {
    cli := mgr.GetClient()

    item := new(v1alpha1.PodChaos)
    if err := cli.Get(context.Background(), client.ObjectKey{Name: name, Namespace: namespace}, item); err != nil {
        return nil, errors.Wrap(err, "get cr")
    }

    return item, nil
}

func ListPodChaos(namespace string, labels map[string]string) ([]v1alpha1.PodChaos, error) {
    cli := mgr.GetClient()

    list := new(v1alpha1.PodChaosList)
    if err := cli.List(context.Background(), list, client.InNamespace(namespace), client.MatchingLabels(labels)); err != nil {
        return nil, err
    }

    return list.Items, nil
}
This example uses the manager mode, which enables the cache mechanism to avoid repeatedly pulling large amounts of data; a minimal manager setup is sketched after the following list:
- Get Pod
- Get the full data for the first time (List)
- Update the Watch cache as the data changes
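As a rough sketch of the manager setup assumed above (again, my own illustration rather than the demo repository's exact code), the manager is created once and its cached client is what GetClient() returns:

package controlpanel

import (
    "github.com/chaos-mesh/chaos-mesh/api/v1alpha1"
    "k8s.io/client-go/kubernetes/scheme"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

// mgr is the shared manager whose GetClient() returns a cached client:
// reads are served from an informer cache kept up to date by Watch events
// instead of hitting the API Server on every request.
var mgr manager.Manager

func InitManager() error {
    if err := v1alpha1.AddToScheme(scheme.Scheme); err != nil {
        return err
    }
    conf, err := ctrl.GetConfig()
    if err != nil {
        return err
    }
    m, err := ctrl.NewManager(conf, ctrl.Options{Scheme: scheme.Scheme})
    if err != nil {
        return err
    }
    mgr = m
    // The cache only starts syncing after the manager is started, e.g.
    // go mgr.Start(ctrl.SetupSignalHandler())
    return nil
}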
Chaos orchestration
Just as the CRI container runtime provides the powerful low-level isolation that supports stable operation of containers, and container orchestration is needed for larger and more complex scenarios, Chaos Mesh® provides Schedule and Workflow capabilities. Schedule triggers faults periodically, at intervals determined by the specified Cron expression. Workflow orchestrates multiple fault experiments, much like Argo Workflows.
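For example, a control plane could submit a Schedule the same way it submits a PodChaos. The sketch below uses an unstructured object so it does not depend on the typed Schedule struct; the spec fields follow the documented Schedule CRD (schedule, type, historyLimit, concurrencyPolicy, and the embedded podChaos), and the NewClient helper is the one assumed earlier.

package main

import (
    "context"
    "controlpanel"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
)

// applySchedulePodKill creates a Schedule that runs a pod-kill experiment
// every five minutes; field names mirror the Schedule CRD documentation.
func applySchedulePodKill(namespace string, labels map[string]interface{}) error {
    cli, err := controlpanel.NewClient()
    if err != nil {
        return err
    }

    sched := &unstructured.Unstructured{}
    sched.SetGroupVersionKind(schema.GroupVersionKind{
        Group: "chaos-mesh.org", Version: "v1alpha1", Kind: "Schedule",
    })
    sched.SetGenerateName("schedule-pod-kill")
    sched.SetNamespace(namespace)
    sched.Object["spec"] = map[string]interface{}{
        "schedule":          "*/5 * * * *", // Cron expression
        "type":              "PodChaos",
        "historyLimit":      1,
        "concurrencyPolicy": "Forbid",
        "podChaos": map[string]interface{}{
            "action": "pod-kill",
            "mode":   "one",
            "selector": map[string]interface{}{
                "namespaces":     []interface{}{namespace},
                "labelSelectors": labels,
            },
        },
    }

    return cli.Create(context.Background(), sched)
}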
Of course, the Chaos Controller Manager does most of this work for us; all the control plane needs to do is manage these YAML resources. The only thing left to figure out is what functionality to provide to the user.
Platform features
With reference to the Chaos Mesh® Dashboard, we need to consider what features the platform should provide to the end user.
Possible platform feature points:
- Chaos injection
- Pod failures
- Network faults
- Load testing
- IO faults
- Event tracking
- Alert correlation
- Time-series telemetry
This article is both a trial run for introducing a new technology into the company and a record of my own learning. The learning materials I used and the references consulted while writing are listed here for quick reference.
- Controller-runtime source code analysis
- つくって学ぶKubebuilder (Japanese tutorial)
- Kubebuilder Book (Chinese version)
- kube-controller-manager source code analysis (iii): the Informer mechanism
- Kubebuilder 2.0 study notes: advanced usage
- client-go and Golang source code techniques
- Chaos Mesh - lets applications dance with Chaos on Kubernetes
- Homemade file systems - 02 A developer's gospel: the FUSE file system
- System stress testing tool: stress-ng
- Dkron source code analysis
- runc source code read-through guide: namespaces