In this article, we offer five tips to help you migrate your project to Kubernetes, based on the experience of the OpenFaaS community over the past 12 months. The following is compatible with Kubernetes 1.8 and has been applied in practice to OpenFaaS: Serverless Functions Made Simple.

Disclaimer: because the Kubernetes API changes frequently, please refer to the official documentation for the latest information.

## 1. Put everything into Docker

The first step is to create a Dockerfile for each component running as a separate process, which may seem obvious. If you’ve already done this, you’re already one step ahead.

But if you haven’t already, make sure you use multi-stage builds for each of your components. A multi-stage build uses two Docker images: one for build time and one for runtime. For example, the base image might be the Go SDK for compiling binaries, and the final stage a minimal Linux image such as Alpine Linux. We copy the binary into the final image, install packages such as CA certificates, and set the entrypoint. The resulting image is small and contains no unwanted packages.

Here’s an example: a multi-stage build of the OpenFaaS API Gateway component, written in Go. You’ll notice that it includes some other good practices: running as a non-root user, naming the build-time stage (`build`), and pinning base images with a version tag such as `3.6`. Using `latest` can lead to unpredictable builds.

For example:

```dockerfile
FROM golang:1.9.4 as build
WORKDIR /go/src/github.com/openfaas/faas/gateway
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o gateway .

FROM alpine:3.6
RUN addgroup -S app \
    && adduser -S -g app app
WORKDIR /home/app
EXPOSE 8080
ENV http_proxy      ""
ENV https_proxy     ""
COPY --from=build /go/src/github.com/openfaas/faas/gateway/gateway .
COPY assets assets
RUN chown -R app:app ./
USER app
CMD ["./gateway"]
```

Note: if you want to use OpenShift (a Kubernetes distribution), you must ensure that all of your Docker images run as a non-root user.

### 1.1 Get Kubernetes

You need to install Kubernetes on your laptop or development machine. You can read a blog post I wrote describing all the common options for running Docker and Kubernetes on a Mac.

If you’ve used Docker before, you’re probably familiar with containers. In the Kubernetes vocabulary you rarely work with containers directly; instead you work with the Pod abstraction.

A Pod is a group of one or more containers that are scheduled and deployed together and can reach each other over the loopback interface, 127.0.0.1.
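As a minimal sketch of this idea, here is a Pod grouping an application with a helper container (all names and images here are illustrative, not from a real project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-helper        # illustrative name
spec:
  containers:
  - name: app                  # the main application
    image: example-org/app:1.0 # hypothetical image
    ports:
    - containerPort: 8080
  - name: helper               # a second container in the same Pod
    image: example-org/helper:1.0
```

Because both containers share the Pod's network namespace, the helper can reach the app at 127.0.0.1:8080 without any extra configuration.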

Here’s an example of how the Pod abstraction can be useful: suppose you have a legacy application that doesn’t support TLS/SSL. It can be deployed in a Pod together with Nginx or another web server configured for TLS. The advantage is that multiple containers can be deployed together to extend functionality without disruptive changes.

## 2. Create YAML files

Once you have Dockerfiles and images, the next step is to start writing YAML files in the Kubernetes format. The cluster reads these files to deploy the application and then maintains the desired state of your project.

This format is different from Docker’s own Compose files, and you might find it difficult at first. My advice is to look for examples in the documentation or in other projects and try to emulate their style and approach. The good news is that it gets easier with experience.

Each Docker image needs to be defined in a Deployment object, which specifies the container to run and the resources it needs. A Deployment creates and maintains a Pod to run your code, and restarts the Pod for you if it exits.

If you want HTTP/TCP access, you need to create a Service object for each component.

You can write multiple Kubernetes definitions in one file, separating the resources with a `---` line. But it is more common to write the definitions in multiple files, each representing one API object in the cluster.
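The single-file form looks like this (a hypothetical combined manifest, with names chosen for illustration):

```yaml
# gateway.yml: Service and Deployment in one file
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - port: 8080
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: functions/gateway:0.7.5
        ports:
        - containerPort: 8080
```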

For example:

- `gateway-svc.yml`: represents a Service
- `gateway-dep.yml`: represents a Deployment

If all the files are in one directory, you can apply them all with one command: `kubectl apply -f ./yaml/`

When you need to run on another operating system or CPU architecture (such as the Raspberry Pi), we recommend putting the files in a separate directory, such as yaml_ARM.

An example of a Deployment

Here is an example of a Deployment for NATS Streaming (a lightweight streaming platform for distributing work):

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nats
  namespace: openfaas
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nats
    spec:
      containers:
      - name: nats
        image: nats-streaming:0.6.0
        imagePullPolicy: Always
        ports:
        - containerPort: 4222
          protocol: TCP
        - containerPort: 8222
          protocol: TCP
        command: ["/nats-streaming-server"]
        args:
          - --store
          - memory
          - --cluster_id
          - faas-cluster
```

A Deployment can also declare that multiple replicas or instances of your service should be created at startup.

The Service definition

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats
  namespace: openfaas
  labels:
    app: nats
spec:
  type: ClusterIP
  ports:
    - port: 4222
      protocol: TCP
      targetPort: 4222
  selector:
    app: nats
```

Services provide a mechanism to load-balance requests across multiple replicas of your Deployment. In the previous example we only had a single replica of NATS Streaming, but if we had several, each with its own IP address, tracking them would become a problem. The advantage of a Service is that it has a stable IP address and DNS entry through which any replica can be reached at any time.

A Service does not map directly to a Deployment; it maps to labels. In the example above, the Service looks for the label app=nats. Labels can be added to or removed from Deployments (and other API objects) at runtime, making it fairly easy to redirect traffic in your cluster. This enables, for example, A/B testing or rolling releases.
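As a sketch of how labels drive routing (the name and image tag here are hypothetical), a second Deployment whose Pods carry the same app=nats label would start receiving a share of the Service's traffic as soon as its Pods are ready:

```yaml
# Hypothetical canary Deployment: its Pods match the
# Service's selector (app=nats), so traffic is shared
# between the original and canary replicas.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nats-canary
  namespace: openfaas
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nats              # matches the Service selector
    spec:
      containers:
      - name: nats
        image: nats-streaming:0.7.0   # hypothetical newer tag
        command: ["/nats-streaming-server"]
```

Removing the app=nats label from one Deployment (or deleting it) shifts all traffic back to the other.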

The best way to learn about Kubernetes-related YAML syntax is to check out the official documentation section on API objects, where you can find examples of YAML or Kubectl using them.

For more API object documentation, see: https://kubernetes.io/docs/concepts/.

### 2.1 Helm

Helm describes itself as the package manager for Kubernetes. From my point of view it provides two main functions:

Distribute your app (as a Chart)

Once the YAML files for your project are ready to distribute, you can package them and submit them to a Helm repository, so that other people can find your app and install it with a single command. Charts themselves can be versioned and can declare dependencies on other Charts.
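A Chart's versioning and dependencies are declared in two small files; here is a sketch with hypothetical names and a placeholder repository URL:

```yaml
# Chart.yaml: metadata for the chart itself
apiVersion: v1
name: my-app                   # hypothetical chart name
version: 0.2.0                 # the chart is versioned independently of the app
description: A hypothetical chart packaging my-app

# requirements.yaml: dependencies on other charts
dependencies:
  - name: nats
    version: 0.1.0
    repository: https://example.com/charts   # hypothetical repo
```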

Here are three examples of Charts: OpenFaaS, Kafka, and Minio.

Make editing easier

Helm supports templates written in Go's built-in templating language, so you can factor common configuration values out into a single file. If you publish a new set of Docker images, you then only need to make the change in one place. You can also write conditionals, so that passing flags to the helm command enables different configuration options and features at deployment time.

In a plain YAML file we define a container image like this: `image: functions/gateway:0.7.5`

With a Helm template we do this: `image: {{ .Values.images.gateway }}`

We can then define the images.gateway value in a separate file. The other thing Helm lets us do is use conditionals, which is useful when supporting multiple architectures or optional features.
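That separate file is conventionally called values.yaml; a sketch of what it might contain for the template above (the serviceType key is an assumption matching the conditional example later in this section):

```yaml
# values.yaml: default values consumed by the templates
images:
  gateway: functions/gateway:0.7.5
serviceType: ClusterIP
```

Individual values can be overridden at install time, for example with `helm install --set serviceType=NodePort`.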

Here is an example of how to choose between a ClusterIP and a NodePort, two different ways of exposing a particular service in a cluster. A NodePort exposes the service outside the cluster, so you may want to control when that happens.

If we used regular YAML files, we would need two sets of configuration; with a Helm template, a conditional does the job:

```yaml
spec:
  type: {{ .Values.serviceType }}
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      {{- if contains "NodePort" .Values.serviceType }}
      nodePort: 31112
      {{- end }}
```

In this case serviceType can be either ClusterIP or NodePort. The conditional adds the nodePort element to the YAML only when the condition is met.

## 3. Use ConfigMaps

In Kubernetes you can use a ConfigMap to load configuration files into the cluster. ConfigMaps are better than bind-mounting, because the configuration data is replicated across the cluster, which makes things more robust. With a bind mount, the data must be placed on a specific host and kept in sync there. Both approaches are better than baking configuration files into the Docker image, since that makes updating the configuration inconvenient.

A ConfigMap can be created on demand with kubectl or from a YAML file. Once a ConfigMap exists in the cluster, it can be attached to a container or Pod.

Here is an example of ConfigMap defined for Prometheus:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: openfaas
data:
  prometheus.yml: |
    scrape_configs:
      - job_name: 'prometheus'
        scrape_interval: 5s
        static_configs:
          - targets: ['localhost:9090']
```

You can load it into a Deployment or Pod:

```yaml
volumeMounts:
  - mountPath: /etc/prometheus/prometheus.yml
    name: prometheus-config
    subPath: prometheus.yml
volumes:
  - name: prometheus-config
    configMap:
      name: prometheus-config
      items:
        - key: prometheus.yml
          path: prometheus.yml
          mode: 0644
```

See ConfigMap Prometheus Config for a complete example.

For more documentation, see: https://kubernetes.io/docs/tas…gmap/.

## 4. Use Secrets securely

To keep your passwords, API keys, tokens, and other sensitive data private and secure, you should use the Kubernetes secret management mechanism.

If you are already familiar with ConfigMaps, the good news is that Secrets are used in much the same way: define the Secret in the cluster, then mount it into a Deployment or Pod.
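As a sketch (the name and value here are illustrative), a Secret is defined like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-key              # illustrative name
  namespace: openfaas
type: Opaque
data:
  key: czNjcjN0              # "s3cr3t", base64-encoded
```

It can also be created from the command line with `kubectl create secret generic api-key --from-literal=key=s3cr3t -n openfaas`, and is then mounted into a Pod under `volumes:` via `secret.secretName`, in the same way a ConfigMap is mounted.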

Another type of secret is used when you need to pull images from a private Docker registry. This is called an image pull secret; see the official documentation for more information.

There is more information on creating and managing Secrets in the official documentation: https://kubernetes.io/docs/con…cret/.

## 5. Check your health checks

Kubernetes implements health checking through liveness and readiness probes. We need to use these mechanisms to give our cluster self-healing and failure protection. They work through a probe that either executes a command inside the Pod or calls a predefined HTTP endpoint.

Liveness

A liveness check determines whether the program is running. For OpenFaaS functions we create a /tmp/.lock file when the function starts. If the function reaches an unhealthy state, we delete the file and Kubernetes reschedules the function for us.

Another common approach is to add a new HTTP route such as /healthz. It is conventional to use the z suffix because it is unlikely to conflict with routes that already exist.

Readiness

If you enable a readiness check, Kubernetes will only forward traffic to containers that pass it.

Readiness checks are performed periodically and are separate from liveness checks. A container can be healthy but overloaded; in that case it reports "not ready", and Kubernetes stops forwarding traffic to it until it recovers.
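Both probes are declared on the container in the Deployment spec. A sketch, with illustrative paths and timings (the /healthz route and port here are assumptions):

```yaml
containers:
- name: gateway
  image: functions/gateway:0.7.5
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 3
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 5
```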

The official documentation has more information on this: https://kubernetes.io/docs/tas…obes/.

## Conclusion

In this article, we listed some of the core tasks to do when migrating a project to Kubernetes. This includes:

  • Creating Docker images
  • Writing Kubernetes manifests (YAML files)
  • Using ConfigMaps to decouple configuration from code
  • Using Secrets to protect private data such as API keys
  • Using liveness and readiness probes for resilience and self-healing

Here you can read my Docker Swarm vs Kubernetes comparison and quick guide to building a cluster:

  • Kubernetes vs Docker/Swarm

Compares Kubernetes and Docker Swarm to give an overview of the tools, from CLI to networking to components.

  • Your instant Kubernetes cluster

If you want to run Kubernetes on a VM or cloud host, this is probably the fastest way to get a development cluster up and running.
