Architecting Applications for Kubernetes

By Justin Ellingwood

Translator: Piglei

Introduction

Designing and running applications that are scalable, portable, and robust can be challenging, especially as system complexity grows. The architecture of an application or system greatly influences how it must be run, how dependent it is on its environment, and how tightly it is coupled to related components. When an application runs in a highly distributed environment, following certain patterns during design and certain practices during operation helps us deal with the most common problems.

While software design patterns and development methodologies can help us produce applications with the right scalability characteristics, the infrastructure and operating environment also shape how deployed systems are operated and maintained. Technologies like Docker and Kubernetes help teams package, distribute, deploy, and scale applications in distributed environments. Learning how to work with these tools effectively will give you greater agility, control, and responsiveness when managing your applications.

In this guide, we'll explore some guidelines and patterns you may want to adopt to help you scale and manage your workloads on Kubernetes. While you can run a wide variety of workloads on Kubernetes, the choices you make affect operational difficulty and the deployment options available to you. How you architect and build applications, how you package services in containers, how you configure lifecycle management, and how you operate on Kubernetes all influence your experience at every point.

Design applications for scalability

When developing software, the patterns and architectures you choose are shaped by many requirements. For Kubernetes, one of the most important considerations is that applications should scale horizontally: you adjust the number of identical application replicas to distribute load and improve availability. This differs from vertical scaling, which tries to achieve the same goals by deploying the application on more or less powerful servers.

For example, the microservices architecture is a software design pattern well suited to running multiple scalable applications in a cluster. Developers create small, composable applications that communicate over the network through well-defined REST interfaces, rather than through the in-process mechanisms of more complex monolithic applications. By splitting a monolithic application into independent functional components, we can scale each component separately. Much of the composition and complexity that used to live in the application layer moves into the operations domain, which is exactly what platforms like Kubernetes are built to manage.

Going beyond any specific software model, cloud native applications are designed with some additional considerations in mind. Cloud native applications follow the microservices architecture pattern and build in resilience, observability, and manageability, tailored to the environment provided by a cluster orchestration platform.

For example, cloud native applications are built with health checks that allow the platform to manage an instance's lifecycle when it becomes unhealthy. They produce (and expose) robust telemetry data that alerts operators to problems and lets them make informed decisions. Applications are designed to tolerate routine restarts, failures, changes in back-end availability, and high load without corrupting data or becoming unresponsive.

Apply the Twelve-Factor App methodology

When creating web applications intended to run in the cloud, there is a popular methodology that helps you focus on the characteristics that matter most: the Twelve-Factor App. It was originally written to help developers and operations teams understand the core qualities shared by web services designed to run in the cloud, and it applies equally well to applications that will run in a clustered environment such as Kubernetes. While monolithic applications can benefit from these recommendations, microservices architectures designed around these principles work particularly well.

A brief summary of the twelve factors:

  1. Codebase: Keep all your code in a version control system (such as Git or Mercurial). What gets deployed is completely determined by the codebase.
  2. Dependencies: Application dependencies should be managed explicitly from the codebase, either vendored (stored alongside the application code) or declared in a manifest that a package manager can resolve and install.
  3. Config: Separate configuration parameters from the application itself. Configuration should be defined in the deployment environment rather than baked into the application.
  4. Backing services: Dependent services, both local and remote, should be abstracted as resources accessible over the network, with connection details defined in the configuration.
  5. Build, release, run: The build stage of your application should be completely separate from the release and run stages. The build stage creates an executable from the application source, the release stage combines that executable with configuration, and the run stage executes the release.
  6. Processes: Applications should run as processes that do not rely on storing state locally. State should be kept in the backing services described in factor 4.
  7. Port binding: Applications should natively bind to a port and listen for connections. All routing and request forwarding should be handled externally.
  8. Concurrency: Applications should scale through the process model. Running multiple copies of the application concurrently, potentially across multiple servers, allows scaling without changing application code.
  9. Disposability: Processes should start quickly and shut down gracefully without serious side effects.
  10. Dev/prod parity: Your testing, staging, and production environments should be kept as similar and as synchronized as possible. Differences between environments can surface compatibility problems and untested configurations.
  11. Logs: Applications should write logs to stdout and let external services decide how best to handle them.
  12. Admin processes: One-off administrative processes should be shipped with the main application code and run against a specific release.

By following the Twelve-Factor guidelines, you can create and run applications with a model that fits the Kubernetes runtime environment well. The twelve factors encourage developers to focus on their application's primary responsibility, to think about operating conditions and the interfaces between components, and to use inputs, outputs, and standard process management so that the application runs predictably in Kubernetes.

Containerize application components

Kubernetes uses containers to run isolated, packaged applications on its cluster nodes. To run on Kubernetes, your application must be wrapped in one or more container images and executed using a container runtime like Docker. Although containerizing your components is a requirement of Kubernetes, the process also helps enforce many of the twelve-factor principles just discussed, making applications easier to scale and manage.

For example, containers provide isolation between the application environment and the external host environment, support a networked, service-oriented approach to inter-application communication, and typically read configuration from environment variables and write logs to standard output and standard error. Containers themselves encourage a process-based concurrency strategy and, by being independently scalable and bundling their runtime environment, help keep development and production environments consistent (factor 10, dev/prod parity). These characteristics make it possible to package your applications so that they run smoothly on Kubernetes.

Container optimization guidelines

Because of the flexibility of container technology, there are many different ways to encapsulate applications. But in the Kubernetes environment, some of them work better than others.

Image building is the process by which you define how your application will be set up and run inside a container. Most of the best practices for containerizing an application relate to the image build process. In general, there are many benefits to keeping image sizes small and images composable. By keeping build steps manageable and reusing existing image layers when images are upgraded, an optimized image reduces the time and resources required to start a new container in the cluster.

When building container images, a good start is to do your best to separate the build steps from the image that will eventually run in production. Building software typically requires extra tooling, takes more time, and produces artifacts that either vary from container to container or are not needed at all in the final runtime environment. One way to cleanly separate the build process from the runtime environment is to use Docker's multi-stage builds feature. A multi-stage build configuration lets you specify different base images for the build and run stages: you build the software in an image with all the build tools installed, then copy the resulting executable into a stripped-down image that will be used from then on.

With this kind of functionality available, it is usually a good idea to base production images on a minimal parent image. If you want to completely avoid the bloat of a "Linux distribution"-style parent image such as ubuntu:16.04 (which includes a full Ubuntu 16.04 environment), you can try building your images from scratch, Docker's minimal base image. However, the scratch base image lacks some core tools, so certain software may fail to run because of missing environment pieces. Another option is the Alpine Linux alpine image, which provides a lightweight but fairly complete Linux distribution and has gained widespread use as a stable, minimal base environment.
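
As a rough sketch of what this looks like in practice, the Dockerfile below builds a hypothetical Go service in a full toolchain image and copies only the resulting binary into an Alpine-based runtime image; the image tags and source paths are illustrative assumptions, not part of the original article:

```dockerfile
# Build stage: full toolchain image with everything needed to compile the application.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Produce a statically linked binary so it can run on a minimal base image.
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: copy only the compiled binary into a stripped-down image.
FROM alpine:3.19
COPY --from=build /app /usr/local/bin/app
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/app"]
```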

For interpreted languages like Python or Ruby, the picture changes slightly: there is no compile stage, and you do need an interpreter to run the code in production. But since lean images are still desirable, Docker Hub offers optimized Alpine Linux-based variants of many language images. For interpreted languages, the benefit of smaller images is similar to that for compiled languages: Kubernetes can quickly pull all the necessary container images onto a new node before real work begins.
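
For an interpreted language, the image simply installs the interpreter, the dependencies, and the code. The sketch below assumes a Python application with an app.py entry point and a requirements.txt file, both of which are illustrative:

```dockerfile
# Alpine-based variant of the official Python image keeps the footprint small.
FROM python:3.12-alpine
WORKDIR /app
# Install dependencies first so this layer is reused when only the code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Bind a port and log to stdout, in line with the twelve-factor guidelines.
CMD ["python", "app.py"]
```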

Decide on the scope of Pods and containers

While your applications must be containerized to run on Kubernetes, Pods are the smallest unit of abstraction that Kubernetes manages directly. A Pod is a Kubernetes object composed of one or more tightly related containers. Containers in the same Pod share a lifecycle and are managed as a single unit: they are always scheduled onto the same node, started and stopped together, and share resources such as IP address and filesystems.

At first, it can be difficult to figure out the best way to split your app into Pods and containers. So it’s important to understand how Kubernetes handles these objects and what each abstraction layer brings to your system. The following tips will help you find some natural boundary points when encapsulating your application with these abstractions.

Looking for natural development boundaries is one way to determine the effective scope of your containers. If your system uses a microservices architecture, well-designed containers are built frequently, each represents a discrete unit of functionality, and each can be reused in a variety of contexts. This level of abstraction lets your team publish changes as container images and then release that new functionality to every environment that uses those images. Applications can be built by composing many containers, each of which fulfills a specific function but does not constitute a complete application on its own.

In contrast, Pods are usually designed by considering which parts of the system would benefit most from independent management. Kubernetes uses Pods as its smallest user-facing abstraction, so they are the most primitive units that the Kubernetes API and tooling can interact with and control directly. You can start, stop, and restart Pods, or use higher-level objects built on Pods to introduce replication and lifecycle management features. Kubernetes does not let you manage the containers within a Pod individually, so containers that would benefit from separate administration should not be grouped together.

Because many of Kubernetes' features and abstractions deal with Pods directly, it makes sense to bundle things that should scale together into a single Pod and to keep things that should scale independently in separate Pods. For example, keeping front-end web servers and application servers in separate Pods lets you scale each layer independently as needed. However, it can sometimes make sense to put a web server and a database adapter layer in the same Pod, if the adapter provides essential functionality the web server needs to work properly.

Enhance Pod functionality with supporting containers

With that in mind, what types of containers should be bundled into the same Pod? Typically, the primary container in a Pod provides the Pod's core functionality, but additional containers can be defined that modify or extend the primary container, or help it adapt to a specific deployment environment.

For example, a web server Pod might have an Nginx container that listens for requests and serves static content, while a second container watches a project for changes and updates that content. While packaging both components into a single container might sound appealing, implementing them as separate containers has significant advantages: the Nginx container and the content-pulling container can each be used independently in different contexts, and they can be maintained and developed by different teams and generalized to work with different companion containers.
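
As a sketch of how this might be expressed, the Pod below runs Nginx alongside a hypothetical content-sync container, with both sharing a volume that holds the static content; the helper image name is made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: site-content
          mountPath: /usr/share/nginx/html   # Nginx serves the shared content
    - name: content-sync
      image: example.com/content-sync:latest # hypothetical helper that pulls content updates
      volumeMounts:
        - name: site-content
          mountPath: /content                # helper writes updates into the same volume
  volumes:
    - name: site-content
      emptyDir: {}                           # scratch volume shared by both containers
```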

Brendan Burns and David Oppenheimer, in their paper "Design Patterns for Container-based Distributed Systems", define three main patterns for bundling supporting containers. These represent some of the most common use cases for packaging containers into Pods:

  • Sidecar: In this pattern, the secondary container extends and enhances the primary container's core functionality. It involves running non-standard or utility functions in a separate container. For example, a container that forwards logs or watches for configuration changes can extend a Pod's functionality without significantly changing its primary focus.

  • Ambassador: The ambassador pattern uses a supporting container to abstract remote resources for the primary container. The primary container connects directly to the ambassador container, which in turn connects to potentially complex pools of external resources, such as a distributed Redis cluster, and abstracts them. The primary container can connect to external services without knowing or caring about their actual deployment environment.

  • Adapter: The adapter pattern is used to translate the primary container's data, protocols, or interfaces to align with the standards expected by outside consumers. Adapter containers can also provide uniform access to centralized services, even when the applications they serve natively support only incompatible interfaces.

Use ConfigMaps and Secrets to manage configuration

Although application configuration can be baked into container images, making your components configurable at runtime better supports deployment across multiple environments and gives you more administrative flexibility. To manage configuration parameters at runtime, Kubernetes provides two objects: ConfigMaps and Secrets.

ConfigMaps are a mechanism for storing data that can be exposed to Pods and other objects at runtime. Data stored in a ConfigMap can be presented as environment variables or mounted into Pods as files. By designing your applications to read configuration from these locations, you can inject configuration at runtime using ConfigMaps and modify component behavior without rebuilding the container image.
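
For example, a minimal sketch might look like the following, assuming an application that reads its log level from an APP_LOG_LEVEL environment variable (the names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  log_level: info                  # illustrative configuration value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      env:
        - name: APP_LOG_LEVEL      # exposed to the application as an environment variable
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: log_level
```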

Secrets are a similar Kubernetes object type used to securely store sensitive data and selectively grant Pods and other components access to it as needed. Secrets are a convenient way to hand sensitive material to your applications without storing it in plain text in an easily accessible place like ordinary configuration. Functionally, they work much like ConfigMaps, so applications can consume data from both in the same ways.
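
A Secret is declared in much the same way; here is a sketch with a placeholder value. A container would consume it with a secretKeyRef entry, mirroring the configMapKeyRef in the previous sketch:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:               # plain-text convenience field; Kubernetes stores it base64-encoded
  password: change-me     # placeholder value, not a real credential
```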

ConfigMaps and Secrets help you avoid putting configuration values directly into Kubernetes object definitions. You can reference only the key name rather than the value, which lets you update configuration on the fly by modifying the ConfigMap or Secret. This makes it possible to change the runtime behavior of live Pods and other Kubernetes objects without modifying the definitions of those resources themselves.

Implement Readiness and Liveness probes

Kubernetes includes a number of out-of-the-box features for managing the component lifecycle and keeping your applications healthy and available. To take advantage of these features, however, Kubernetes has to understand how it should monitor and interpret your application's health. To that end, Kubernetes lets you define liveness probes and readiness probes.

Liveness probes let Kubernetes determine whether the application inside a container is alive and running. Kubernetes can periodically run commands inside the container to check basic application behavior, or send HTTP or TCP requests to a designated address to determine whether the process is available and responding as expected. If a liveness probe fails, Kubernetes restarts the container to try to restore functionality to the Pod.

A similar tool, the readiness probe, is used to determine whether a Pod is ready to receive traffic. A containerized application may need to perform initialization before accepting client requests, or may need to reload when handed new configuration. When a readiness probe fails, Kubernetes pauses sending requests to that Pod instead of restarting it, giving the Pod time to complete its initialization or maintenance tasks without affecting the health of the group as a whole.

By combining liveness and readiness probes, you can have Kubernetes automatically restart Pods or remove them from back-end service groups. Configuring your infrastructure to take advantage of these capabilities lets Kubernetes manage the availability and health of your applications without extra operational work.
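
As a sketch, the container below defines both probes, assuming the application serves health and readiness endpoints on port 8080; the paths, image, and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # illustrative image
      livenessProbe:               # failure causes the container to be restarted
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:              # failure removes the Pod from Service endpoints
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
```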

Use Deployments to manage scalability and availability

When we discussed the fundamentals of Pod design earlier, we mentioned that other Kubernetes objects build on Pods to provide more advanced functionality. Of these composite objects, Deployments are probably the ones you will define and manipulate most often.

Deployments are composite objects that build on other Kubernetes primitives to provide additional functionality. They add lifecycle management capabilities to an intermediate object type called a ReplicaSet, such as rolling updates, rollbacks to earlier versions, and transitions between states. A ReplicaSet lets you define a Pod template and then create and manage multiple replicas from that template. This makes it easy to scale out your infrastructure, manage availability requirements, and automatically restart Pods on failure.
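
A minimal Deployment sketch might look like this, with three replicas and a rolling-update strategy; the labels, image, and port are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # desired number of interchangeable Pods
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # keep most replicas serving while updating
  template:                        # Pod template managed by the underlying ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # illustrative image
          ports:
            - containerPort: 8080
```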

These extra features give the relatively simple Pod abstraction a management framework and self-healing capabilities. Although Pods are the units that ultimately run your workloads, they are not usually the units you should be provisioning and managing directly. Instead, think of Pods as building blocks that run applications reliably when provisioned through higher-level objects like Deployments.

Create Services and Ingress rules to manage access to the application layer

Deployments let you provision and manage sets of interchangeable Pods to scale your applications and meet user demand. However, routing traffic to those Pods is a separate concern: as Pods are swapped out during rolling updates, restarted, or moved because of host failures, the network addresses previously associated with the group change. Kubernetes Services handle this complexity for you by maintaining routing information for dynamic pools of Pods and managing access to the various layers of your infrastructure.

In Kubernetes, Services are the mechanism that controls how traffic is routed to sets of Pods. Whether forwarding traffic from external clients or managing connections between internal components, Services let you control how traffic should flow. Kubernetes then updates and maintains all the information needed to forward connections to the relevant Pods, even as the environment and networking conditions change.

Access Services from within the cluster

To use Services effectively, you should first determine the intended consumers of each group of Pods. If a Service will only be used by other applications deployed in the same Kubernetes cluster, the ClusterIP type lets you reach a group of Pods through a stable IP address that is only routable from inside the cluster. Any object deployed on the cluster can communicate with the replicated Pods by sending traffic directly to the Service's IP address. This is the simplest Service type and is well suited to internal application layers.
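
A sketch of a ClusterIP Service that fronts Pods labeled app: web (matching the earlier Deployment sketch) might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP            # default type; reachable only from inside the cluster
  selector:
    app: web                 # traffic is forwarded to Pods carrying this label
  ports:
    - port: 80               # port exposed on the Service's cluster IP
      targetPort: 8080       # port the containers actually listen on
```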

Kubernetes provides an optional DNS add-on that offers name resolution for Services, allowing Pods and other objects to communicate using domain names instead of IP addresses. This mechanism does not significantly change how Services are used, but domain-based identifiers make it easier to connect components and define interactions without needing to know Service IP addresses in advance.

Expose Services to the public network

If your application needs to be reachable from the public internet, a LoadBalancer Service is usually the best choice. It uses your cloud provider's API to provision a load balancer that sends all traffic arriving at a public IP address to the Service's Pods. This provides a controlled channel from the outside world into your cluster's internal network, directing external traffic to your Service's Pods.
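
A sketch of the same Service exposed externally is shown below; the cloud provider provisions the external load balancer when it sees the LoadBalancer type (labels and ports remain illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer         # asks the cloud provider for an external load balancer
  selector:
    app: web                 # illustrative label, matching the earlier sketches
  ports:
    - port: 80
      targetPort: 8080
```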

Because the LoadBalancer type creates one load balancer per Service, exposing Kubernetes Services this way can become expensive. To help alleviate this, Kubernetes Ingress objects describe how to route different types of requests to different Services based on a predetermined set of rules. For example, requests for "example.com" might go to Service A, while requests for "sammytheshark.com" might be routed to Service B. Ingress objects provide a way to describe how a mixed stream of requests should be routed to its target Services based on predefined patterns.
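
A sketch of that host-based routing, written against the networking.k8s.io/v1 form of the resource, might look like this; the Service names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing
spec:
  rules:
    - host: example.com              # requests for this host go to service-a
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-a      # illustrative Service name
                port:
                  number: 80
    - host: sammytheshark.com        # requests for this host go to service-b
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-b      # illustrative Service name
                port:
                  number: 80
```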

Ingress rules must be interpreted by an ingress controller, typically some kind of load balancer (such as Nginx) deployed in the cluster as a Pod, which implements the Ingress rules and distributes traffic to Kubernetes Services accordingly. At the time of writing, the Ingress resource definition was still in beta, but several working implementations are available that help cluster owners minimize the number of external load balancers they need to run.

Use declarative syntax to manage Kubernetes state

Kubernetes offers a lot of flexibility in how you define and manage the resources deployed to a cluster. Using tools like kubectl, you can imperatively define ad-hoc resources and quickly deploy them to the cluster. While this can be useful for deploying resources quickly while learning Kubernetes, it has several drawbacks and is not well suited to long-term production management.

One of the biggest problems with imperative management is that it leaves no record of the changes you have deployed to the cluster. That makes it difficult, if not impossible, to recover from failures and to track operational changes within your systems.

Fortunately, Kubernetes also provides a declarative syntax that lets you fully define resources in text files and then apply those configurations or changes with kubectl. Keeping these configuration files in your version control system is a simple way to monitor changes and integrate them with the review processes used elsewhere in your organization. File-based management also makes it easy to adapt existing patterns to new resources by copying and modifying existing definitions. Storing Kubernetes object definitions in versioned directories lets you maintain a snapshot of your desired cluster state at each point in time, which is invaluable when you need to recover from failures, migrate, or track down unintended changes in your system.

Conclusion

Managing the infrastructure that runs your applications and learning how to best take advantage of the features offered by modern orchestration systems can be daunting. However, the benefits of Kubernetes and container technologies are only fully realized when your development and operations practices align with the concepts these tools are built around. Architecting your systems around the patterns Kubernetes does best, and understanding how specific features can ease the challenges of highly complex deployments, will improve your experience of running on the platform.