In previous articles, we looked at the basic concepts of Kubernetes, its hardware structure and its different software components (such as Pods, Deployments, StatefulSets, Services, Ingress, and Persistent Volumes), and we saw how services communicate with each other and with the outside world.

In this article, we will learn:

  1. Create a NodeJS backend that uses a MongoDB database

  2. Write a Dockerfile to containerize our application

  3. Create a Kubernetes Deployment script to spin up the Pods

  4. Create a Kubernetes Service script to define connections between containers and the outside world

  5. Deploy an Ingress Controller for request routing

  6. Write a Kubernetes Ingress script to define communication with the outside world.

Because our code can be relocated from one node to another (for example, when a node doesn't have enough memory, the work is rescheduled onto a different node that does), any data saved on a node is vulnerable to loss, which means the MongoDB data would not be durable. In the next article, we'll discuss data persistence and how to safely store our persistent data using Kubernetes Persistent Volumes.

In this article, we will use NGINX as the Ingress Controller and Azure Container Registry to store our custom Docker images. All the scripts written in this article are available in the Stupid Simple Kubernetes git repo:

https://github.com/CzakoZoltan08/StupidSimpleKubernetes-AKS

Note: These scripts are not platform specific, so you can follow this tutorial with other cloud providers or on a local cluster with K3s. I recommend K3s because it is very lightweight: all of its dependencies are packaged in a single binary of less than 100 MB. More importantly, it is a highly available, CNCF-certified Kubernetes distribution designed for production workloads in resource-constrained environments. For more information, visit the official documentation:

docs.rancher.cn/k3s/

Prerequisites

Before starting this tutorial, make sure you have Docker and kubectl installed.

kubectl installation instructions:

kubernetes.io/docs/tasks/…

The kubectl commands used in this tutorial can be found in the kubectl cheat sheet (kubernetes.io/docs/refere…).
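
To confirm that both tools are installed, you can run:

docker --version
kubectl version --client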

In this tutorial, we will use Visual Studio Code, but this is not required; you can use any other editor.

Create a production-ready microservice architecture

Containerize the application

The first step is to create a Docker image of the NodeJS backend. Once the image is created, we will push it to the container registry, from where it can be pulled by our Kubernetes service (in this case, Azure Kubernetes Service).
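
The backend implementation itself is not the focus of this article, so below is only a minimal sketch of what index.js could look like. The actual code in the repo may differ; the express and mongoose dependencies, the users-db database name and the /users endpoint are assumptions made for illustration:

const express = require("express");
const mongoose = require("mongoose");

const app = express();
app.use(express.json());

// "user-db-service" is the MongoDB Service name defined later in this
// article; inside the cluster, Kubernetes DNS resolves it to the Pod.
mongoose.connect("mongodb://user-db-service:27017/users-db", {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

// Hypothetical User model and endpoint, for illustration only.
const User = mongoose.model("User", new mongoose.Schema({ name: String }));

app.get("/users", async (req, res) => {
  const users = await User.find();
  res.json(users);
});

// Must match the port exposed in the Dockerfile and the Service's targetPort.
app.listen(3000, () => console.log("Listening on port 3000"));

The details that matter for the rest of the tutorial are the port (3000) and the MongoDB connection string, which uses the name of the Kubernetes Service defined later in this article.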

The Dockerfile for the NodeJS backend:

FROM node:13.10.1

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install
# Bundle app source
COPY . .

EXPOSE 3000

CMD [ "node", "index.js" ]

In the first line, we define the base image from which the image of our back-end service will be built. In this case, we will use the official Node image from Docker Hub, version 13.10.1.

In line 3, we create a directory to store the application code in the image. This will be the working directory for your application.

The image already has Node.js and npm installed, so next we need to install the application dependencies using the npm command.

Note that to install the necessary dependencies, instead of copying the entire directory, we just copy package.json, which allows us to take advantage of the Docker layer of caching.

For more information about efficient Dockerfiles, visit the following link:

bitjudo.com/blog/2014/0…

In line 9, we copy the source code into the working directory, and in line 11 we expose it on port 3000 (you can choose another port if you want, but make sure to update the Kubernetes Service script accordingly).

Finally, on line 13, we define the command to run the application (inside the Docker container). Note that there should be only one CMD directive per Dockerfile. If there are more than one, only the last one takes effect.

Now that we have defined the Dockerfile, let's build an image from it using the following Docker command (run it in the terminal of Visual Studio Code, or in CMD on Windows):

docker build -t node-user-service:dev .

Notice the little dot at the end of the Docker command, which means we are building the image from the current directory, so make sure you are in the same folder as the Dockerfile (in this case, the root folder of the repo).

To run the image locally, we can use the following command:

docker run -p 3000:3000 node-user-service:dev  

To push this image to our Azure Container Registry, we must tag it using the format <registry-name>/<image-name>:<tag>. In our case:

docker tag node-user-service:dev stupidsimplekubernetescontainerregistry.azurecr.io/node-user-service:dev

The final step is to push it into our container image repository using the following Docker command:

docker push stupidsimplekubernetescontainerregistry.azurecr.io/node-user-service:dev
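
Note that pushing to a private registry requires authentication first. For Azure Container Registry this can be done with the Azure CLI, assuming you have access to the registry:

az acr login --name stupidsimplekubernetescontainerregistry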

Create the Pod using the deployment script

NodeJS backend

Next, define the Kubernetes Deployment script, which will automatically manage the Pod for us.

apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: node-user-service-deployment  
spec:  
  selector:  
    matchLabels:  
      app: node-user-service-pod  
  replicas: 3  
  template:  
    metadata:  
      labels:  
        app: node-user-service-pod  
    spec:  
      containers:  
        - name: node-user-service-container  
          image: stupidsimplekubernetescontainerregistry.azurecr.io/node-user-service:dev  
          resources:  
            limits:  
              memory: "256Mi"  
              cpu: "500m"  
          imagePullPolicy: Always  
          ports:  
            - containerPort: 3000

Using the Kubernetes API, we can query and manipulate the state of objects in the Kubernetes cluster (for example, Pods, namespaces, ConfigMaps, and so on). As specified in the first line, apps/v1 is the current stable version of the API used for Deployments.

In each kubernetes.yml script, we must define the Kubernetes resource type (Pods, Deployments, Service, etc.) using the kind keyword. So, as you can see, we define in line 2 that we want to use the Deployment resource.

Kubernetes allows you to add some metadata to a resource. This way, you can more easily identify, filter, and reference resources.

In line 5, we define the specification of this resource. In line 8, we specify that this Deployment applies only to the Pods labeled app: node-user-service-pod, and in line 9 you can see that we want to create three replicas of the same Pod.

The template (starting at line 10) defines the Pods. Here we add the label app: node-user-service-pod to each Pod, so that the Deployment can identify them. In lines 16 and 17, we define which Docker container should run inside the Pod. As you can see in line 17, we will use the Docker image from our Azure Container Registry, built and pushed in the previous section.

We can also define resource limits for the Pods, which avoids resource starvation (where one Pod uses up all the resources and other Pods get nothing). Furthermore, when you specify the resource request for the containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces that limit, so the running container cannot use more of that resource than the limit you set; the kubelet also reserves at least the requested amount of that system resource for the container. Note that if your nodes don't have sufficient hardware resources (such as CPU or memory), the Pod can never be scheduled.
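
The manifest above sets only limits; as a sketch, requests could be added alongside them like this (the request values here are illustrative, not taken from the original repo):

          resources:
            requests:   # used by the scheduler to choose a node
              memory: "128Mi"
              cpu: "250m"
            limits:     # enforced by the kubelet at runtime
              memory: "256Mi"
              cpu: "500m"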

The last step is to define ports for communication. In this case, we use port 3000. This port number should be the same as the one exposed in the Dockerfile.
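
Once this manifest is applied (the apply commands are listed at the end of the article), you can follow the rollout and inspect the resulting Pods with:

kubectl rollout status deployment/node-user-service-deployment
kubectl get pods -l app=node-user-service-pod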

MongoDB

The Deployment script for the MongoDB database is very similar. The only difference is that we must specify a volume mount (a folder on the node where the data will be saved).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-db-deployment
spec:
  selector:
    matchLabels:
      app: user-db-app
  replicas: 1
  template:
    metadata:
      labels:
        app: user-db-app
    spec:
      containers:
        - name: mongo
          image: mongo:3.6.4
          command:
            - mongod
            - "--bind_ip_all"
            - "--directoryperdb"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
          resources:
            limits:
              memory: "256Mi"
              cpu: "500m"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: static-persistence-volume-claim-mongo

In this case, we use the official MongoDB image directly from Docker Hub (line 17). The volume mount is defined in line 24. The last four lines will be explained in the next article, when we discuss Kubernetes Persistent Volumes.
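
After this Deployment is applied, a quick way to check that MongoDB is up is to exec into the Pod (the Pod name will differ in your cluster, so list it first):

kubectl get pods -l app=user-db-app
kubectl exec -it <mongo-pod-name> -- mongo --eval "db.stats()"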

Create services for network access

With the Pods up and running, we can now define the communication between the containers and the outside world. To do this, we need to define a Service. The relationship between a Service and a Deployment is one-to-one, so for each Deployment we should have a Service. The Deployment also manages the lifecycle of the Pods and is responsible for monitoring them, while the Service is responsible for enabling network access to a set of Pods.

apiVersion: v1  
kind: Service  
metadata:  
  name: node-user-service  
spec:  
  type: ClusterIP  
  selector:  
    app: node-user-service-pod  
  ports:  
    - port: 3000  
      targetPort: 3000

The important part of this .yml script is the selector, which defines how to identify the Pods (created by the Deployment) that this Service should reference. As you can see in line 8, the selector is app: node-user-service-pod, because that is how the Pods of the previously defined Deployment are labeled. Another important thing is the mapping between the container port and the Service port. In this case, incoming requests use port 3000, defined in line 10, and are routed to the targetPort defined in line 11.
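
Once the Service is applied, you can sanity-check it without going through the Ingress by using kubectl port-forward (the /users endpoint is the assumption from the backend sketch above):

kubectl port-forward service/node-user-service 3000:3000
# in a second terminal:
curl http://localhost:3000/users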

The Kubernetes Service script for the MongoDB Pods is very similar. We just have to update the selector and the ports. (Note that clusterIP: None makes this a headless Service: the Service name resolves directly to the Pod's IP instead of going through a load-balanced cluster IP.)

apiVersion: v1  
kind: Service  
metadata:  
  name: user-db-service  
spec:  
  clusterIP: None  
  selector:  
    app: user-db-app  
  ports:  
    - port: 27017  
      targetPort: 27017

Configuring External Traffic

To communicate with the outside world, we need to define an Ingress Controller and specify routing rules using the Ingress Kubernetes resource.

To configure the NGINX Ingress Controller, we will use the script found at the following link:

github.com/CzakoZoltan…

This is a generic script that can be applied without modification (a detailed explanation of the NGINX Ingress Controller is beyond the scope of this article).
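
After applying it, you can verify that the controller Pods are running in the ingress-nginx namespace:

kubectl get pods --namespace ingress-nginx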

The next step is to define a “load balancer” that will be used to route external traffic using public IP addresses (the cloud provider provides the load balancer).

kind: Service  
apiVersion: v1  
metadata:  
  name: ingress-nginx  
  namespace: ingress-nginx  
  labels:  
    app.kubernetes.io/name: ingress-nginx  
    app.kubernetes.io/part-of: ingress-nginx  
spec:  
  externalTrafficPolicy: Local  
  type: LoadBalancer  
  selector:  
    app.kubernetes.io/name: ingress-nginx  
    app.kubernetes.io/part-of: ingress-nginx  
  ports:  
    - name: http  
      port: 80  
      targetPort: http  
    - name: https  
      port: 443  
      targetPort: https
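
After applying this Service, the cloud provider provisions a public IP for it; you can watch until the EXTERNAL-IP column is populated:

kubectl get service ingress-nginx --namespace ingress-nginx --watch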

Now that we have the Ingress Controller and load balancer up and running, we can define the Ingress Kubernetes resource to specify routing rules.

apiVersion: extensions/v1beta1  
kind: Ingress  
metadata:  
  name: node-user-service-ingress  
  annotations:  
    kubernetes.io/ingress.class: "nginx"  
    nginx.ingress.kubernetes.io/rewrite-target: /$2  
spec:  
  rules:  
    - host: stupid-simple-kubernetes.eastus2.cloudapp.azure.com  
      http:  
        paths:  
          - backend:  
              serviceName: node-user-service  
              servicePort: 3000  
            path: /user-api(/|$)(.*)  
          # - backend:  
          #     serviceName: nestjs-i-consultant-service  
          #     servicePort: 3001  
          #   path: /i-consultant-api(/|$)(.*)

In line 6, we define the Ingress Controller type (this is a predefined value; Kubernetes currently supports and maintains the GCE and NGINX controllers).

In line 7, we define the rewrite target rule, and in line 10, we define the host name.

For each service that should be accessible from the outside world, we add an entry to the paths list (starting at line 13). In this example, we added a single entry for the NodeJS user service backend, which is reachable through port 3000. The /user-api prefix uniquely identifies our service, so any request starting with stupid-simple-kubernetes.eastus2.cloudapp.azure.com/user-api will be routed to this NodeJS backend. If you want to add other services, you have to update this script (see the commented-out code).
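
Note that the rewrite-target annotation strips the /user-api prefix before the request reaches the Service, so a call like the one below arrives at the backend as /users (again assuming the endpoint from the earlier backend sketch):

curl http://stupid-simple-kubernetes.eastus2.cloudapp.azure.com/user-api/users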

Apply the .yml scripts

To apply these scripts, we will use kubectl. The kubectl command to apply a manifest file is:

kubectl apply -f <file-name>

In this example, if you are in the root folder of Stupid Simple Kubernetes repo, you need to execute the following command:

kubectl apply -f .\manifest\kubernetes\deployment.yml  
kubectl apply -f .\manifest\kubernetes\service.yml  
kubectl apply -f .\manifest\kubernetes\ingress.yml  
kubectl apply -f .\manifest\ingress-controller\nginx-ingress-controller-deployment.yml  
kubectl apply -f .\manifest\ingress-controller\ngnix-load-balancer-setup.yml  

After applying these scripts, we are ready to call the back end externally (using Postman, for example).
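
Before reaching for Postman, you can get a quick overview of everything that was created with:

kubectl get deployments,services,ingress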

Conclusion

In this tutorial, we learned how to create different kinds of resources in Kubernetes: Pods, Deployments, Services, Ingress and the Ingress Controller. We created a NodeJS backend with a MongoDB database, containerized both, and deployed them, running the NodeJS Pods with three replicas.

In the next article, we’ll look at persistent data issues and introduce persistent volumes in Kubernetes.

About the author

Czako Zoltan is an experienced full-stack developer with extensive experience in front end, back end, DevOps, the Internet of Things and artificial intelligence.