
In recent years, containers and Kubernetes have become a key technology trend for developers and enterprise users. This article summarizes ten important techniques for building and managing containers to optimize IT costs and increase efficiency.

The container is the core carrier for applications in Kubernetes. When creating Kubernetes workloads, such as rules for scheduling, scaling, or upgrading applications, you first need to create a container image that runs the service or Kubernetes workload. After testing the image and integrating it with the rest of the application code, users typically push the image to the container registry. But before pushing, there are still plenty of practical tips to help build and manage containers.


With Kubernetes, users can automatically scale their business with little or no downtime, optimizing IT costs and improving system reliability.

1. Use the latest Kubernetes patterns

As Kubernetes continues to introduce new features, its application patterns gradually change. To keep a Kubernetes cluster in line with the latest patterns, regularly check the official Kubernetes documentation and each version's release notes.

2. Reuse base images to save time

To create application containers in a Kubernetes cluster, users need to build a Docker base image and then build some or all of the application containers on top of it. Many applications share dependencies, libraries, and configuration, so these shared portions can be baked into the base image once and reused.

Docker Hub and Google Container Registry host thousands of pre-configured, ready-to-use base images available for download, which can save a lot of time.
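As a sketch of this reuse, a team might maintain one shared base image and build every service on top of it (the image, registry, and package names below are illustrative, not from the original article):

```dockerfile
# base.Dockerfile — shared foundation for the team's services
FROM node:20-slim

# Dependencies and configuration that every service needs
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
```

Each application then starts from the shared base instead of repeating that setup:

```dockerfile
# app.Dockerfile — a service built on the shared base
FROM registry.example.com/team/base:1.0
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```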

3. Don’t blindly trust any image

While it’s convenient to use a pre-built image, be extra careful and make sure you run a dedicated vulnerability scan on it.

Some developers grab a base image created by another user from the Docker Hub and push the container into production, all because at first glance the image contains the desired package.

Many things can go wrong: the image may contain the wrong version of the code; the code may be buggy; or worse, the image may have been deliberately bundled with malware.

To mitigate these problems, users can run static analysis with tools such as Snyk or Twistlock and integrate it into a CI/CD (continuous integration and continuous delivery) pipeline to scan every container for vulnerabilities.

In general, once a vulnerability is found in the base image, users should rebuild the entire image, not just patch it in place. Containers should be immutable, so the patch is applied by rebuilding and redeploying the image.
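One way to wire a scan into a pipeline is with an open-source scanner such as Trivy (named here as an example alternative to the commercial tools above; the image name is illustrative), failing the build when serious vulnerabilities are found:

```shell
# Build the image, then fail the pipeline if the scanner finds
# high- or critical-severity vulnerabilities.
docker build -t registry.example.com/team/app:1.4.2 .
trivy image --exit-code 1 --severity HIGH,CRITICAL \
    registry.example.com/team/app:1.4.2
```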

4. Optimize the base image

Start with the leanest, most viable base image, and build your package from there. In this way, you can know exactly what is in the container.

Smaller base images also reduce overhead. For example, starting from a stock Node.js image and installing every library can easily produce an image of around 600 MB, even though you don’t actually need most of those extra libraries.

Therefore, keeping the image as lean as possible yields:

  • Faster builds

  • Less storage space

  • Faster image pulls

  • A smaller potential attack surface
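A common way to achieve this is a multi-stage build: compile with the full toolchain, then copy only the artifacts into a slim runtime image. A minimal sketch, assuming a Node.js app with a build script:

```dockerfile
# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the runtime and the built artifacts
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```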

5. Ensure that the container runs only one process

Similar to keeping the base image minimal, ensure that there is only one process per container. The lifecycle of a container is the same as the application it hosts, which means that each container should contain only one parent process.

According to Google Cloud, treating a container like a virtual machine and running multiple processes in it at once is a common mistake. While containers can do this, doing so defeats the self-healing properties of Kubernetes.

In general, the container and its application should start together; likewise, when the application stops, the container should stop too. If a container runs more than one process, the application can end up in a mixed state, leaving Kubernetes unable to determine whether the container is healthy.
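As a sketch, a Pod spec with a single container and a liveness probe lets Kubernetes tie health directly to that one process (image name, health path, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web            # one container, one parent process
    image: registry.example.com/team/app:1.4.2
    livenessProbe:       # Kubernetes restarts the container if this fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
```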

6. Handle Linux signals correctly

Containers control the life cycle of their internal processes through Linux signals. To tie the application lifecycle to the container, you need to make sure your application handles Linux signals correctly.

The Linux kernel uses signals such as SIGTERM, SIGKILL, and SIGINT to terminate processes. However, a process running inside a container (especially as PID 1) handles these signals differently from an ordinary process, and if its behavior does not match the signals’ default semantics, errors and hung shutdowns can occur.

Using a dedicated container init system, such as Tini, can help solve this problem. Tini runs as PID 1, registers the proper signal handlers, forwards signals to the application, and reaps zombie and orphaned processes so that the container shuts down cleanly and memory is reclaimed.
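Within the application itself, handling SIGTERM can be sketched in a few lines of Python (the actual cleanup logic is omitted; the self-sent signal simulates the container runtime stopping the Pod):

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Record that a graceful shutdown was requested so the main
    # loop can finish in-flight work and release resources.
    global shutting_down
    shutting_down = True

# Register the handler. A process running as PID 1 that does not
# register one will simply ignore SIGTERM, and the container
# runtime will eventually resort to SIGKILL.
signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the container runtime sending SIGTERM to the process.
os.kill(os.getpid(), signal.SIGTERM)
print("graceful shutdown requested:", shutting_down)  # → True
```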

7. Make full use of Docker’s build cache

Container images consist of a series of layers generated by the instructions in a Dockerfile. These layers and the build order are typically cached by the container platform. Docker, for example, has a build cache that allows layers to be reused across builds. This cache makes builds faster, but a layer can only be reused if all of its parent layers are cached and unchanged. Simply put, put the rarely changing layers first and the frequently changing layers last.

For example, if you have a build file that contains steps X, Y, and Z and changes are made to step Z, the build file can reuse steps X and Y in the cache because these layers existed before the change to Z, which speeds up the build process. However, if step X is changed, the layers in the cache cannot be reused.

While this behavior is convenient and saves time, you must ensure that all image layers are up to date and not built from old, stale caches.
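The ordering rule above can be sketched in a Dockerfile, assuming a Node.js service (names are illustrative):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Rarely changing layers first: dependency manifests. These layers
# (and the install below) are reused from cache as long as
# package*.json are unchanged.
COPY package*.json ./
RUN npm ci --omit=dev

# Frequently changing layer last: application source. Editing the
# source invalidates only this layer and those after it.
COPY . .
CMD ["node", "server.js"]
```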

8. Use a package manager such as Helm

Helm acts as the unofficial package manager for Kubernetes, helping to install and update workloads and containers running in the cluster. Helm uses charts to declare custom application dependencies and provides rolling-upgrade and rollback tools.

Users can deploy common services, such as databases or web servers, into the Kubernetes cluster from existing public charts; for internal applications, they can also create custom base images and custom charts to simplify deployment and reduce the development team’s workload and rework.
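A typical workflow looks like the following CLI sketch (the release name is illustrative; the Bitnami repository and PostgreSQL chart are used as an example off-the-shelf service):

```shell
# Add a public chart repository and install an off-the-shelf database.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql

# Upgrade the release, and roll it back if something goes wrong.
helm upgrade my-db bitnami/postgresql
helm rollback my-db 1
```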

9. Use tags and semantic version numbers

As a general rule, users should not use the :latest tag. To most developers this is obvious: if you do not give an image a specific tag, Kubernetes will pull whatever is currently tagged latest from the registry, which may not include the changes you expect.

When creating custom images, use image tags and semantic version numbers to track changes to the Docker container. When containers run in a Kubernetes cluster, Kubernetes uses the image tag to determine which version to run. When choosing a versioning scheme for Docker images, consider both the production workload and the development process, so that both work well in Kubernetes.
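In practice this amounts to building and pushing an immutable, semantically versioned tag and pointing the workload at exactly that version (image and deployment names are illustrative):

```shell
# Tag the image with an immutable semantic version instead of :latest
docker build -t registry.example.com/team/app:1.4.2 .
docker push registry.example.com/team/app:1.4.2

# Point the Kubernetes workload at that exact version
kubectl set image deployment/app app=registry.example.com/team/app:1.4.2
```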

10. Security

In many cases, when Docker images are built, applications in the container need to access sensitive data, such as API tokens, private keys and database connection strings.

Embedding this secret information into a container is not a secure solution, even if the image is stored in a private registry. Unencrypted secrets shipped as part of a Docker image face numerous additional security risks, including network and image-registry security; and because image layers are cached and immutable, Docker’s own architecture offers no reliable way to remove unencrypted sensitive data from a container once it is baked in.

Instead, users can store private information outside the container by using Kubernetes Secrets objects, which is simpler and safer.

Kubernetes provides a Secrets abstraction that allows private data to be stored outside the Docker image or Pod definition. Users can expose this information to containers through mounted volumes or environment variables. To rotate credentials, simply update the Secret and replace the service’s Pods. Users can also manage private data with HashiCorp Vault or Bitnami Sealed Secrets.
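A minimal sketch of a Secret consumed as an environment variable (all names and the value are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                      # stored outside the image
  DB_PASSWORD: s3cr3t-example    # illustrative value only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/team/app:1.4.2
    env:
    - name: DB_PASSWORD          # injected at runtime
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: DB_PASSWORD
```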

Original link: https://www.jianshu.com/p/3870b815a966

