Using Kubernetes from 0 to 1 (Part 3): In this article, we will show how to deploy Nginx on Kubernetes and access it through its Pod IP, a Service IP, and an Ingress.

Traditional Kubernetes application construction

Create a Namespace

Multiple namespaces can be created in a Kubernetes cluster for “environment isolation”. When there are many projects and people, you can consider dividing namespaces according to the actual situation of the project (such as production, testing, development).

Create a Namespace named “nginx”:

[root@localhost~]# kubectl create ns nginx

namespace "nginx" created

View the Namespace that has been created in the cluster:

[root@localhost~]# kubectl get ns

NAME          STATUS   AGE
default       Active   35d
kube-public   Active   35d
kube-system   Active   35d
nginx         Active   19s
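
As a convenience, the current kubectl context can be pointed at this namespace so that later commands do not need an explicit -n nginx flag (the examples below still pass -n nginx explicitly for clarity):

# Optional: make "nginx" the default namespace for the current kubectl context
# (on older kubectl versions, replace --current with the context name)
[root@localhost~]# kubectl config set-context --current --namespace=nginx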

Create a Deployment

A Deployment provides declarative updates for Pods and ReplicaSets (the next-generation Replication Controller). You simply describe the desired target state in the Deployment, and the Deployment Controller changes the actual state of the Pods and ReplicaSet to match it. Developers can define a new Deployment to create a ReplicaSet, or delete an existing Deployment and create a new one to replace it. Using a Deployment makes it easier to manage Pods, including scaling up and down, pausing, rolling updates, rollbacks, and so on. Choerodon represents a Deployment as an instance, and additionally supports online upgrade, stop, delete, and other operations.

Typical application scenarios include:

  • Define a Deployment to create Pods and a ReplicaSet
  • Roll out updates and roll back applications
  • Scale capacity up and down
  • Pause and resume a Deployment
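
As an illustration of these scenarios, the following commands are a minimal sketch run against the nginx-deployment created in the next step (the image tag used for the rolling update is an arbitrary example):

# Scale the Deployment up to 3 replicas, then back down to 1
[root@localhost~]# kubectl scale deployment nginx-deployment -n nginx --replicas=3
[root@localhost~]# kubectl scale deployment nginx-deployment -n nginx --replicas=1

# Trigger a rolling update by changing the container image (tag chosen only as an example)
[root@localhost~]# kubectl set image deployment/nginx-deployment nginx=nginx:1.15.5-alpine -n nginx

# Pause and resume an in-progress rollout
[root@localhost~]# kubectl rollout pause deployment/nginx-deployment -n nginx
[root@localhost~]# kubectl rollout resume deployment/nginx-deployment -n nginx

# Watch the rollout and roll back to the previous revision if needed
[root@localhost~]# kubectl rollout status deployment/nginx-deployment -n nginx
[root@localhost~]# kubectl rollout undo deployment/nginx-deployment -n nginx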

Write a file named dp.yaml with the following contents:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.5-alpine
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
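
Note that the apps/v1beta1 API used above has been removed from newer Kubernetes releases. On such clusters the same Deployment could be written against apps/v1, which additionally requires an explicit selector matching the Pod template labels; this variant is a sketch and is not part of the original walkthrough:

apiVersion: apps/v1              # for clusters where apps/v1beta1 is no longer served
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:                      # required in apps/v1; must match the Pod template labels
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.5-alpine
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80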

After saving, use the kubectl command to deploy it:

[root@localhost~]# kubectl apply -f dp.yaml

deployment.apps"nginx-deployment"created

To view the deployed Deployment, execute the following command:

[root@localhost~]# kubectl get deployment -n nginx

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1         1         1            1           51s

To view the Pod created by Deployment, run the following command:

[root@localhost~]# kubectl get pod -n nginx -o wide

NAME                                 READY    STATUS    RESTARTS   AGE     IP                 NODE
nginx-deployment-866d7c64c7-8rnd5    1/1      Running   0          3m      10.233.68.248      clusternode11
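
If the Pod is not Running, the following commands (a general troubleshooting sketch; substitute the Pod name from your own output) usually reveal the cause, such as an image pull error or a failing readiness probe:

# Show scheduling events, probe results, and container state for the Pod
[root@localhost~]# kubectl describe pod nginx-deployment-866d7c64c7-8rnd5 -n nginx

# Show the container logs
[root@localhost~]# kubectl logs nginx-deployment-866d7c64c7-8rnd5 -n nginx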

If the Pod status is Running, Nginx can be accessed from within the cluster using the Pod IP:

[root@localhost~]# curl 10.233.68.248
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

For more information about Deployments, please refer here: kubernetes.io/docs/concep…

Create a Service

Kubernetes Pods have a life cycle. They can be created and destroyed, but once destroyed they are gone for good. Pods can be created and destroyed dynamically through a Deployment. Each Pod gets its own IP address, but these addresses are not fixed and are reclaimed when the Pod is destroyed. This leads to a problem: in a Kubernetes cluster, if a set of Pods (call them the backend) provides a service to other Pods (the frontend), how do the frontend Pods discover and connect to the backends in that set?

A Service is what solves this problem.

Write a file named svc.yaml with the following contents:

apiVersion: v1
kind: Service
metadata:
  namespace: nginx
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80

Deploy it with the kubectl command:

[root@localhost~]# kubectl apply -f svc.yaml

service "nginx-service"created

To view the deployed Service, run the following command:

[root@localhost~]# kubectl get svc -n nginx

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-service   ClusterIP   10.233.47.128   <none>        80/TCP    56s

The Pod's labels are matched by the selector in the Service, so Nginx can now be accessed from within the cluster through this Service (Choerodon also provides visual creation of Services to make network creation easier):

[root@localhost~]# curl 10.233.47.128
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
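
The selector-to-Pod mapping can be verified by listing the Endpoints object that the Service maintains; it should contain the Pod IP seen earlier (the output below is illustrative). Inside the cluster, the Service is also reachable through its DNS name:

[root@localhost~]# kubectl get endpoints nginx-service -n nginx

NAME            ENDPOINTS           AGE
nginx-service   10.233.68.248:80    2m

# From inside any Pod in the cluster, the Service also resolves by DNS name:
# curl http://nginx-service.nginx.svc.cluster.local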

For more information about Services, please refer here: kubernetes.io/docs/concep…

Create an Ingress

At this point, Nginx can only be accessed from inside the cluster and from the cluster hosts themselves. To make Nginx reachable from hosts outside the cluster, you need to create an Ingress. An Ingress gives Services externally reachable URLs and provides load balancing, SSL termination, and HTTP routing. An Ingress corresponds to a domain name in Choerodon; in addition to managing domain names, Choerodon also manages domain name certificates, supporting online application and import.

Write a file named ing.yaml with the following contents:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: nginx
spec:
  rules:
  - host: nginx.example.local   # This domain name must resolve to the IP of the host that receives ingress traffic for the K8s cluster
    http:
      paths:
      - backend:
          serviceName: nginx-service
          servicePort: 80
        path: /
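
Note that an Ingress resource only takes effect when an ingress controller is running in the cluster. Depending on how that controller was installed, the Ingress may also need an ingress class annotation; the snippet below is a sketch, and the class name "nginx" is an assumption that must match your controller's configuration:

metadata:
  name: nginx-ingress
  namespace: nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"   # assumed class name; must match the installed controller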

Deploy it with the kubectl command:

[root@localhost~]# kubectl apply -f ing.yaml

ingress.extensions "nginx-ingress" created

To view the deployed Ingress, run the following command:

[root@localhost~]# kubectl get ingress -n nginx
 
NAME             HOSTS                    ADDRESS    PORTS     AGE
nginx-ingress    nginx.example.local                 80        1m

At this point, Nginx can be accessed from a browser using the defined URL.
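
If DNS for nginx.example.local has not been set up yet, the Ingress rule can still be exercised with curl by forcing the hostname to resolve to the address of the host that exposes the ingress controller. This is a sketch; <INGRESS_HOST_IP> is a placeholder for that address:

# Replace <INGRESS_HOST_IP> with the IP address that exposes the ingress controller
[root@localhost~]# curl --resolve nginx.example.local:80:<INGRESS_HOST_IP> http://nginx.example.local/

# Alternatively, set the Host header explicitly
[root@localhost~]# curl -H "Host: nginx.example.local" http://<INGRESS_HOST_IP>/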

For more information about Ingress, please refer here: kubernetes.io/docs/concep…

After looking at the traditional way of building an application on Kubernetes, let’s take a look at how to do it in Choerodon.

Application construction based on Choerodon

One-click deployment

Application deployment in Choerodon is straightforward. The platform builds a set of environment, instance, service, and domain-name objects that map onto the underlying Kubernetes objects, and provides a visual interface for creating and modifying them.

To deploy an application on the Choerodon platform, you only need to click “Create Application” on the “Application Management” page, create branches and submit code in the “Development Pipeline”, and then deploy the application from the “Deployment Pipeline” page by choosing the application, its version, the target environment, and the deployment mode, and setting up the network and the domain name. With that, the application deployment is complete.

So what does Choerodon do behind the scenes when building the application? This brings us to GitOps.

GitOps

Choerodon uses Kubernetes as its base platform, packages applications as Helm charts, and abstracts a Helm release into a Kubernetes custom object, so that the state of every application deployed into an environment can be described by Kubernetes resource object files. In Choerodon, the YAML files that application construction finally generates are stored in GitLab as a configuration repository. By comparing changes to these configuration YAML files, the state of the application can be determined, achieving the separation of business code and configuration code.
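
As an illustration only, a custom object of this kind, stored as a YAML file in the environment's configuration repository, might look roughly like the sketch below; the apiVersion, kind, and field names are made up for this example and are not Choerodon's actual schema:

# Hypothetical illustration of a Helm-release custom resource kept in the
# environment's Git configuration repository (field names are invented for this sketch)
apiVersion: example.io/v1alpha1
kind: HelmRelease
metadata:
  name: nginx-demo
  namespace: nginx
spec:
  chartName: nginx-demo          # chart produced by the application's CI pipeline
  chartVersion: 0.1.0
  values: |
    replicaCount: 1
    image:
      tag: 1.13.5-alpine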

At the same time, when you create an environment on the platform, a Git repository corresponding to that environment is created to store the deployment configuration files. After that, every deployment-related operation in the environment is converted into an operation on the configuration files in that Git repository, and the Choerodon deployment service is triggered to record the state. After the application state is recorded, Choerodon’s Agent running in Kubernetes is triggered to deploy the application. In the end, the entire application deployment process can be traced through the configuration repository, the Choerodon deployment service’s state records, and the Choerodon Agent.

Conclusion

As you can see from the steps above, from construction to external access Kubernetes provides Namespace, Deployment, Service, Ingress and other base objects. On top of this, the Choerodon platform builds a set of environment, instance, service, and domain-name objects that map onto them, adding capabilities such as rolling upgrades, fault tolerance, and service testing, all managed through a friendly UI, so that users with little knowledge of Kubernetes objects can create their own applications through the page.

The Choerodon platform not only enables one-click deployment, but also provides complete test management for testing newly released applications. The knowledge management module provides an internal information-sharing platform, while the report module provides more detailed development and iteration information. The Choerodon platform extends the continuous integration and deployment capabilities of Kubernetes, integrating the various base platforms more tightly and making it more suitable for enterprise DevOps.

To read more about Kubernetes in this series:

  • Use Kubernetes from 0 to 1
  • From 0 to 1 using Kubernetes series (2) – Installation tool introduction
  • Using Kubernetes from 0 to 1: Install Kubernetes cluster using Ansible

About the Choerodon toothfish

Choerodon is an open source enterprise services platform that builds on Kubernetes’ container orchestration and management capabilities and integrates DevOps toolchains, microservices and mobile application frameworks to help enterprises achieve agile application delivery and automated operations management. It also provides IoT, payment, data, intelligent insights, enterprise application marketplace and other business components to help enterprises focus on their business and accelerate digital transformation.

You can follow the latest developments and product features of Choerodon, and participate in community contributions, through the following community channels:

  • Official website: choerodon.io
  • Forum: forum.choerodon.io
  • GitHub: github.com/choerodon/

Welcome to join the Choerodon Toothfish community to create an open ecological platform for enterprise digital services.