Besides Kubernetes, what other important container orchestration tools are there?
Kubernetes is the most popular container orchestration platform, both in terms of production adoption and the breadth of its cloud-native ecosystem. However, Kubernetes is not the only option for enterprises. A number of other container orchestration tools are available for different infrastructure environments, and many of them, such as OpenShift, AWS EKS, and Docker Swarm, have already achieved wide user acceptance and adoption. This article takes a look at these container orchestration platforms.
OpenShift
Red Hat OpenShift is a container platform as a service (PaaS) that automates the deployment of applications on secure, scalable resources across today's hybrid cloud environments. It provides an enterprise-grade platform for building, deploying, and managing containerized applications.
It is built on Red Hat Enterprise Linux and the Kubernetes engine. OpenShift offers a range of capabilities for managing clusters through both the web UI and the CLI. In addition, Red Hat offers two hosted variants: OpenShift Online, a software-as-a-service product, and OpenShift Dedicated, a managed service product.
OpenShift Origin (the Origin Community Distribution) is the open source upstream community project on which OpenShift Container Platform, OpenShift Online, and OpenShift Dedicated are built.
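As a rough illustration of the CLI workflow mentioned above, the sketch below deploys a sample application with the oc client; the cluster URL, project name, and app name are illustrative assumptions, not details from the article.

```bash
# A minimal sketch of deploying an app with the OpenShift CLI (oc),
# assuming access to an existing cluster; names and URL are illustrative.
oc login https://api.example-cluster.com:6443   # authenticate against the cluster API
oc new-project demo                             # create a project (namespace)
oc new-app nginx --name=web                     # deploy an app from an image or image stream
oc expose service/web                           # expose it to external traffic via a route
oc get pods                                     # verify that the pods are running
```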
Nomad
Nomad is a simple, flexible, and easy-to-use workload orchestrator that can deploy and manage containerized and non-containerized applications at scale, both on-premises and in the cloud. Nomad ships as a single binary of roughly 35 MB and is available on macOS, Windows, and Linux.
Users describe their applications and how they should be deployed with declarative infrastructure as code (IaC), and Nomad automatically recovers applications from failures. A minimal job specification is sketched after this list of capabilities.
Nomad can orchestrate any kind of application, not just containers, with first-class support for Docker, Windows, Java, virtual machines, and more.
In addition, Nomad can modernize legacy applications without rewriting them, simplifies multi-cloud deployments, and integrates natively with Terraform, Consul, and Vault.
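The sketch below shows what such a declarative Nomad job might look like; it assumes a running Nomad agent (for example, one started with `nomad agent -dev`), and the job, datacenter, and image names are illustrative.

```bash
# A minimal sketch of a declarative Nomad job; names are illustrative.
cat > web.nomad <<'EOF'
job "web" {
  datacenters = ["dc1"]

  group "app" {
    count = 2                     # desired number of instances

    task "server" {
      driver = "docker"           # Nomad also supports exec, java, qemu, and other drivers
      config {
        image = "nginx:alpine"
      }
      resources {
        cpu    = 100              # MHz
        memory = 128              # MB
      }
    }
  }
}
EOF
nomad job run web.nomad           # submit the job; Nomad schedules and supervises it
nomad job status web              # inspect placement and health
```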
Docker Swarm
Docker Swarm uses a declarative model to define the desired state of a service, which Docker then maintains. Docker Enterprise integrated Kubernetes alongside Swarm, so Docker now offers a choice of orchestration engine. The Docker Engine CLI is used to create a swarm of Docker Engines onto which application services can be deployed.
The docker command is used to interact with the cluster. Machines that join the cluster are called nodes, and the swarm manager handles the cluster's activity.
Docker Swarm has two main node roles. Manager nodes assign tasks to worker nodes in the cluster and elect a leader using the Raft consensus algorithm; the leader makes all cluster-management and task-orchestration decisions for the cluster. Worker nodes receive tasks from the manager nodes and execute them.
Docker Swarm is also quite capable: cluster management is integrated with Docker Engine, it uses a decentralized design and a declarative service model, and it provides multi-host networking, service discovery, load balancing, and rolling updates.
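The sketch below shows how this declarative model looks in practice: a swarm is created with the Docker CLI and a replicated service is scaled by changing its desired state. It assumes Docker Engine is installed on each machine; the IP address, service name, and image are illustrative.

```bash
# A minimal sketch of creating a swarm and a replicated service; names are illustrative.
docker swarm init --advertise-addr 192.168.1.10   # run on the first manager node
# Run the printed `docker swarm join --token ...` command on each worker node.

docker service create --name web --replicas 3 -p 8080:80 nginx:alpine
docker service ls                                  # list services and their replica counts
docker service scale web=5                         # Swarm reconciles to the new desired state
docker node ls                                     # show manager and worker nodes (run on a manager)
```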
Docker Compose
Docker Compose is used to define and run multi-container applications that work together. Docker Compose describes groups of interdependent services that share software dependencies and are orchestrated and scaled together.
You configure your application's services in a YAML file (typically docker-compose.yml), then use the docker-compose up command to create and start all of the services from that configuration.
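As a small illustration of that workflow, the sketch below defines two cooperating services in a Compose file and brings them up; the service names and images are illustrative assumptions.

```bash
# A minimal sketch of a two-service Compose file; names and images are illustrative.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # host:container
    depends_on:
      - cache
  cache:
    image: redis:alpine
EOF
docker-compose up -d     # create and start all services in the background
docker-compose ps        # list the running containers of this composition
docker-compose down      # stop and remove the containers and the default network
```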
Docker Compose can also be used to decompose application code into several independently running services that communicate using an internal network. It provides a CLI for managing the entire life cycle of an application. Docker Compose has traditionally been focused on development and test workflows, but is now more production-oriented.
The underlying Docker Engine can be a standalone instance provisioned with Docker Machine or an entire Docker Swarm cluster.
Its main features include multiple isolated environments on a single host, preservation of volume data when containers are created, re-creation of only the containers that have changed, and variables for moving a composition between environments.
Minikube
Minikube lets users run Kubernetes locally. With Minikube, you can test applications inside a single-node Kubernetes cluster on a personal computer. Minikube also has built-in support for the Kubernetes dashboard.
Minikube runs the latest stable version of Kubernetes and supports load balancing, multiple clusters, persistent volumes, NodePorts, container runtimes including Docker, CRI-O, and containerd, CNI plugins, and more.
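A typical local test loop might look like the sketch below; the deployment name, image, and driver choice are illustrative assumptions.

```bash
# A minimal sketch of local testing with Minikube; names are illustrative.
minikube start --driver=docker            # start a single-node cluster locally
kubectl create deployment hello --image=nginx:alpine
kubectl expose deployment hello --type=NodePort --port=80
minikube service hello --url              # print a local URL for the exposed service
minikube dashboard                        # open the built-in Kubernetes dashboard
minikube stop                             # shut the cluster down when finished
```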
Marathon
Marathon runs on Apache Mesos and can orchestrate both applications and frameworks.
Apache Mesos is an open source cluster manager. Mesos is an Apache project that can run containerized and non-containerized workloads side by side. The main components of a Mesos cluster are the Mesos agent nodes, the Mesos master, ZooKeeper, and frameworks; a framework coordinates with the master to schedule tasks onto the agent nodes. Users interact with the Marathon framework to schedule jobs.
The Marathon scheduler uses ZooKeeper to locate the current master to which it submits tasks. Both the Marathon scheduler and the Mesos master run standby instances to ensure high availability. Clients interact with Marathon through its REST API.
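As a rough sketch of that REST interaction, the example below posts an application definition to Marathon's /v2/apps endpoint; the host, port, and app definition are illustrative assumptions.

```bash
# A minimal sketch of submitting an app to Marathon over its REST API; values are illustrative.
cat > app.json <<'EOF'
{
  "id": "/web",
  "cmd": "python3 -m http.server 8080",
  "cpus": 0.25,
  "mem": 128,
  "instances": 2
}
EOF
curl -X POST http://marathon.example.com:8080/v2/apps \
     -H "Content-Type: application/json" \
     -d @app.json                                     # Marathon schedules the app onto Mesos agents
curl http://marathon.example.com:8080/v2/apps/web     # query the app's status
```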
Marathon’s strengths include high availability, support for stateful applications, user friendliness, service discovery and load balancing, health checks, and a REST API.
Cloudify
Cloudify is an open source cloud orchestration tool for automating the deployment and lifecycle management of containers and microservices. It provides features such as on-demand clustering, automatic healing, and scaling at the infrastructure level. Cloudify manages the container infrastructure and orchestrates the services that run on the container platform.
It integrates easily with Docker and with container managers such as Docker Swarm, Docker Compose, Kubernetes, and Apache Mesos.
Cloudify helps create, heal, scale, and tear down container clusters. Container orchestration is key to providing a scalable, highly available infrastructure on which container managers can run. Cloudify can also orchestrate heterogeneous services across platforms. Applications can be deployed using the CLI or Cloudify Manager.
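A typical CLI workflow might look like the sketch below, assuming a running Cloudify Manager and an existing blueprint; the blueprint, deployment names, and the scaled entity are illustrative assumptions rather than details from the article.

```bash
# A minimal sketch of a typical Cloudify CLI (cfy) workflow; names are illustrative.
cfy blueprints upload -b web-app blueprint.yaml     # upload the blueprint to the manager
cfy deployments create -b web-app web-app-dep       # create a deployment from the blueprint
cfy executions start install -d web-app-dep         # run the install workflow
cfy executions start scale -d web-app-dep \
    -p scalable_entity_name=web_server              # scale a node defined in the blueprint
```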
Rancher
Rancher is also an open source container orchestration platform. It can take advantage of orchestration services such as Kubernetes, Swarm, and Mesos. Rancher provides the software needed to manage containers, so organizations do not have to build a container services platform from scratch out of a distinct set of open source technologies.
Rancher 2.x allows you to manage a Kubernetes cluster running on a customer-specified provider. The Rancher user interface allows you to manage thousands of Kubernetes clusters and nodes.
Containership
Containership focuses on deploying and managing multi-cloud Kubernetes infrastructure. A single tool provides the flexibility to operate across public cloud, private cloud, and on-premises environments. It enables users to provision, manage, and monitor Kubernetes clusters across all major cloud providers.
Containership is built on cloud-native tools such as Terraform for provisioning, Prometheus for monitoring, and Calico for networking and policy management. It is built on vanilla Kubernetes. The Containership platform provides intuitive dashboards as well as a powerful REST API for complex automation.
AZK
AZK is an open source orchestration tool for development environments, driven by a manifest file (azkfile.js). This file helps developers install, configure, and run the common tools needed to develop web applications with different open source technologies.
AZK uses containers instead of virtual machines. Compared with virtual machines, containers offer better performance and consume fewer physical resources.
You can reuse an existing azkfile.js to add new components, or create one from scratch. The file can be shared, which keeps development environments consistent across different developers' machines and reduces the chance of errors during deployment.
AWS EKS
AWS EKS (Elastic Kubernetes Service) is Amazon's managed container orchestration service. AWS lets users run EKS clusters on AWS Fargate, a serverless compute engine for containers. AWS Fargate removes the need to provision and manage servers and lets you pay per application for the resources it uses.
Through EKS, AWS also allows the use of other features, such as Amazon CloudWatch, Amazon Virtual Private Cloud (VPC), AWS Identity and Access Management (IAM), and Auto Scaling groups, for monitoring, scaling, and load-balancing applications. EKS integrates with AWS App Mesh and provides a Kubernetes-native experience. EKS runs upstream Kubernetes and is certified Kubernetes-conformant.
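As a rough sketch of getting started, the example below creates an EKS cluster whose pods run on Fargate using the eksctl CLI; the cluster name and region are illustrative assumptions.

```bash
# A minimal sketch of creating an EKS cluster with Fargate; names are illustrative.
eksctl create cluster --name demo --region us-east-1 --fargate
aws eks update-kubeconfig --name demo --region us-east-1   # point kubectl at the new cluster
kubectl get nodes                                          # Fargate capacity appears as pods are scheduled
```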
GKE
GKE is a container orchestration service on Google Cloud. GKE clusters are powered by Kubernetes, and you interact with them using the Kubernetes CLI. The kubectl command can be used to deploy and manage applications, perform administrative tasks, set policies, and monitor the health of deployed workloads.
Google Cloud's advanced management features are also available for GKE clusters, such as Google Cloud load balancing, node pools, node auto-scaling, auto-upgrades, node auto-repair, and logging and monitoring with Google Cloud's operations suite.
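The sketch below shows this workflow end to end: a cluster is created with gcloud, kubectl is pointed at it, and a workload is exposed through Google Cloud load balancing. The cluster name, zone, and image are illustrative assumptions.

```bash
# A minimal sketch of creating a GKE cluster and deploying to it; names are illustrative.
gcloud container clusters create demo --zone us-central1-a --num-nodes 2
gcloud container clusters get-credentials demo --zone us-central1-a   # configure kubectl
kubectl create deployment web --image=nginx:alpine
kubectl expose deployment web --type=LoadBalancer --port=80           # backed by Google Cloud load balancing
kubectl get service web                                               # wait for an external IP
```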
AKS
AKS is the container orchestration service from Azure, offering serverless Kubernetes, security, and governance. AKS manages the Kubernetes control plane and automatically configures the Kubernetes masters and nodes; users only need to manage and maintain the agent nodes.
AKS itself is free: you pay only for the agent nodes in the cluster, not for the control plane. Users can create AKS clusters in the Azure portal or programmatically. Azure also supports additional features such as advanced networking, Azure Active Directory integration, and monitoring with Azure Monitor.
AKS also supports Windows Server containers. The performance of the cluster and of deployed applications can be monitored with Azure Monitor, and logs are stored in an Azure Log Analytics workspace. AKS is certified Kubernetes-conformant.
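As a rough sketch, the example below creates an AKS cluster with the Azure CLI, with the monitoring add-on enabled; the resource group and cluster names are illustrative assumptions.

```bash
# A minimal sketch of creating an AKS cluster with the Azure CLI; names are illustrative.
az group create --name demo-rg --location eastus
az aks create --resource-group demo-rg --name demo-aks \
    --node-count 2 --enable-addons monitoring --generate-ssh-keys   # only the agent nodes are billed
az aks get-credentials --resource-group demo-rg --name demo-aks     # merge credentials into kubeconfig
kubectl get nodes                                                   # only agent nodes are visible; the control plane is managed
```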