Not only can you deploy simple applications on Kubernetes, but you can also use Kubernetes Operators for day 2 operations.

In my first article, "Why Kubernetes is a dump truck", I talked about how Kubernetes is good at defining, sharing, and running applications, similar to how dump trucks are good at moving garbage. In the second, "How to navigate the Kubernetes learning curve", I explained that the Kubernetes learning curve is really the same curve as learning to run any application in production, and that it is definitely easier than learning all of the traditional components (load balancers, routers, firewalls, switches, clustering software, clustered file systems, and so on). This is DevOps: a collaboration between developers and operations to specify how things should work in production, which means both sides need to learn. In the third article, "Kubernetes basics: learn how to drive first", I reframed learning Kubernetes around driving the dump truck rather than building or equipping it. In the fourth, "Four tools to help you navigate Kubernetes", I shared my favorite tools for building applications in Kubernetes (driving the dump truck).

In this final article, I’ll share why I’m so excited about the future of running applications on Kubernetes.

From the beginning, Kubernetes ran web-based (containerized) workloads just fine. Workloads such as web servers, Java, and the associated application servers (PHP, Python, and so on) all work well. The platform handles supporting services such as DNS, load balancing, and SSH (replaced by kubectl exec). For most of my career, these were the workloads I ran in production, so I immediately recognized the power of running production workloads with Kubernetes, on top of DevOps and on top of agile. We can be more efficient even if we barely change our cultural habits. Debugging and decommissioning become very easy, which is something traditional IT struggles with. So, from the early days, Kubernetes gave me all the basic primitives I needed to model my production workloads in a single configuration language (Kube YAML/JSON).
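To make that concrete, here is a minimal sketch of what modeling such a workload looks like in Kube YAML; the application name and image are illustrative assumptions, not taken from any specific production setup:

```yaml
# A minimal sketch of a web workload modeled in Kube YAML.
# The names and image below are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
# A Service gives the Deployment built-in load balancing and DNS.
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
spec:
  selector:
    app: my-web-app
  ports:
  - port: 80
    targetPort: 80
```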

But what happens if you need to run multi-master MySQL with replication? What about redundant data with Galera? How do you take snapshots and backups? What about a workload as complex as SAP? With Kubernetes, day 1 (deployment) of a simple application (a web server, etc.) is fairly straightforward, but it does not address day 2 operations and day 2 workloads. (Day 0 is the design phase of the software life cycle; day 1 is the development and deployment phase; day 2 is operating and maintaining the software in production.) That is not to say that day 2 operations for complex workloads are harder to solve with Kubernetes than with traditional IT, but Kubernetes does not make them any easier either. It has been up to each user to devise their own clever solutions to these problems, and that is largely the status quo today. Day 2 operations for complex workloads is the first class of problem I have run into over the past five years.

Thankfully, this is changing with the advent of Kubernetes Operators. With Operators, we now have a framework for codifying day 2 operations knowledge into the platform. We can apply the same defined state/actual state methodology I described in "Kubernetes basics: learn how to drive first" to define, automate, and maintain a wide variety of systems administration tasks.

(An Operator is a Kubernetes component that encodes and performs the specific work of an operations engineer.)
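To see how an Operator turns day 2 knowledge into defined state, here is a hypothetical custom resource for an imaginary MySQL Operator; the API group, the MySQLCluster kind, and every field shown are assumptions made for illustration, not a real project's API:

```yaml
# Hypothetical custom resource for an imaginary MySQL Operator.
# The API group, kind, and all fields are illustrative assumptions.
apiVersion: example.com/v1alpha1
kind: MySQLCluster
metadata:
  name: production-db
spec:
  replicas: 3              # desired cluster size; the Operator handles joining and replication
  version: "8.0"
  backup:
    schedule: "0 2 * * *"  # day 2 knowledge (backups) declared as state
    retention: 7
```

Instead of a human wiring up replication by hand, the Operator watches this resource and continuously reconciles the actual cluster toward the declared state.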

I often refer to Operators as "robot sysadmins," because they essentially codify the day 2 operations knowledge for a given type of workload (databases, web servers, and so on) that a subject matter expert (SME, for example a database administrator or systems administrator) holds, knowledge that is usually documented somewhere in a wiki. The problem with keeping this knowledge in a wiki is that, to apply it to solving a problem, we need to:

  1. Trigger an event: typically a monitoring system detects a fault and we create a ticket
  2. Have an SME investigate the problem, even if it is something we have seen a million times before
  3. Have the SME execute the knowledge (perform a backup/restore, configure Galera or transactional replication, etc.)

With an Operator, all of this SME knowledge can be embedded in a separate container image that is deployed before the actual workload. We deploy the Operator container, and it then deploys and manages one or more instances of the workload. We can then manage the Operators themselves with something like the Operator Lifecycle Manager (see the Katacoda tutorial).
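As a sketch of what that looks like in practice, installing an Operator through the Operator Lifecycle Manager is itself declarative; the package and catalog names below are illustrative assumptions:

```yaml
# Sketch: subscribing to an operator via the Operator Lifecycle Manager (OLM).
# The package name and catalog source are illustrative assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-db-operator
  namespace: operators
spec:
  channel: stable
  name: my-db-operator
  source: operatorhubio-catalog
  sourceNamespace: olm
```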

So, as we move forward with Kubernetes, we simplify not only application deployment but also management across the entire life cycle. Operators also give us the tools to manage very complex, stateful applications with deep configuration requirements (clustering, replication, repair, backup/restore). And best of all, the people building the containers are probably the subject matter experts who do the day 2 operations, so now they can embed that knowledge into the operating environment.
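As a final usage sketch, even a day 2 action such as an on-demand backup can become a declarative object that the Operator reconciles; this MySQLBackup kind and its fields are, again, hypothetical:

```yaml
# Hypothetical: requesting a one-off backup as a declarative object
# that the Operator acts on; the kind and fields are assumptions.
apiVersion: example.com/v1alpha1
kind: MySQLBackup
metadata:
  name: production-db-manual-backup
spec:
  clusterRef: production-db   # the MySQLCluster declared earlier
```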

Summary of this series

The future of Kubernetes is bright, and just as with virtualization before it, widespread adoption of workloads is inevitable. Learning how to drive Kubernetes is probably the biggest investment a developer or systems administrator can make in their career, and career opportunities will grow along with the workloads. So, here's to driving an amazing dump truck that's very graceful when moving garbage…

You might want to follow me on Twitter at @FatherLinux, where I share a lot more about this topic.


Via: opensource.com/article/19/…

By Scott McCarty, lujun9972

This article was originally translated by LCTT and published by Linux China.