In the previous section, we described how to deploy the public cloud version of our application on Kubernetes and how to orchestrate and manage the containers. To go a step further and support calls between services, both internal and external, we need service discovery. This is the "man and dog" relationship we mentioned earlier.
Those of you who have worked with microservices probably know what service discovery is: the Eureka framework in the Spring Cloud project exists to do exactly this. Its main job is to register internal services so that other services in the cluster can look them up and call them.
It is reasonable to guess that Kubernetes likely inspired the service discovery mechanisms found in the various microservice frameworks.
The modules responsible for service discovery in Kubernetes are Service and Ingress. Let's look at each in turn.
Service and Ingress
A Kubernetes Service is similar to the service registration function in those frameworks.
The logic is simple: when a Service is declared in Kubernetes, a VIP (virtual IP) is generated for it. All other components in the cluster can reach the Service through this VIP, and the VIP does not change as the backing Pods come and go; it stays fixed for the lifetime of the Service.
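To make the "fixed VIP, changing backends" idea concrete, here is a toy Python sketch (not the Kubernetes API; the class and IPs are invented for illustration): the VIP is assigned once at creation, while the endpoint list behind it can be replaced freely.

```python
# Conceptual sketch only: a Service keeps a fixed virtual IP
# while the set of backing Pod IPs churns over time.

class Service:
    def __init__(self, name, vip):
        self.name = name
        self.vip = vip        # assigned once at creation, never changes
        self.endpoints = []   # Pod IPs behind the Service; these may churn

    def update_endpoints(self, pod_ips):
        # Pods are replaced (rescheduled, scaled, restarted)...
        self.endpoints = list(pod_ips)

svc = Service("hostnames", "10.96.0.10")
svc.update_endpoints(["10.244.0.5", "10.244.0.6"])
svc.update_endpoints(["10.244.0.7"])   # all Pods replaced...
assert svc.vip == "10.96.0.10"         # ...but the VIP is unchanged
```

Clients only ever remember the VIP; which Pods actually sit behind it is an implementation detail the cluster keeps up to date.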
Service
What does the Service actually point to? As with the Deployment above, this is determined by the selector field. We can create a Service with YAML as follows:
apiVersion: v1
kind: Service
metadata:
  name: hostnames
spec:
  selector:
    app: hostnames
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 9376
As introduced in the previous article, this Service proxies the Pods whose labels match app=hostnames. There is also a new field, ports, which describes the protocol, the port the Service exposes, and the target port inside the proxied Pods.
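The selector's matching rule is simple subset matching: a Pod is selected when its labels contain every key/value pair in the selector. A minimal Python sketch (the Pod names are invented for illustration):

```python
# Toy illustration of Service selector semantics: a Pod matches
# when its labels include every key/value pair of the selector.

def matches(selector, pod_labels):
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"app": "hostnames"}
pods = [
    {"name": "hostnames-1", "labels": {"app": "hostnames"}},
    {"name": "hostnames-2", "labels": {"app": "hostnames", "tier": "web"}},
    {"name": "payments-1",  "labels": {"app": "payments"}},
]
selected = [p["name"] for p in pods if matches(selector, p["labels"])]
# selected == ["hostnames-1", "hostnames-2"]
```

Note that extra labels on a Pod (like tier: web above) do not prevent a match; only the selector's own keys are checked.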
We can create the Service from the sample-service.yaml file and then inspect it:
kubectl apply -f sample-service.yaml
The Service has a ClusterIP: this IP is the VIP generated for the Service, and other members of the cluster can access the Service through it. However, since no Pods back this Service yet, requests to this IP will not succeed at this point.
So we need to create a concrete implementation for this Service. The following sample-deployment.yaml file creates a multi-replica Pod set, where each Pod simply returns its own hostname:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
In this manifest we expose port 9376 of the container, because this is the port through which each Pod serves traffic. Again we create the Pod replicas and check them by executing:
kubectl apply -f sample-deployment.yaml
As you can see, the Pod replicas have been created successfully. At this point, following the controller pattern described in the previous section, the controller responsible for Services finds the Pods that satisfy app=hostnames and binds them to the Service. We can now request the ClusterIP from any host in the cluster:
As you can see, we made several requests and each one returned a different Pod name. This is because the Service load-balances across its backing Pods internally (in practice via the kube-proxy component's iptables/IPVS rules), so we get load balancing essentially for free.
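The effect we just observed can be mimicked with a toy round-robin balancer. This is a deliberate simplification: real kube-proxy forwarding is done with iptables or IPVS rules rather than application code, and its default selection is closer to random than strict round-robin, but the observable behavior (successive requests landing on different Pods) is the same idea.

```python
import itertools

# Toy load balancer: cycles through the Service's endpoints so that
# consecutive "requests" hit different Pods, like our curl loop above.
class RoundRobinBalancer:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["hostnames-1", "hostnames-2", "hostnames-3"])
picks = [lb.pick() for _ in range(6)]
# picks cycles: hostnames-1, hostnames-2, hostnames-3, hostnames-1, ...
```

The Pod names here are invented placeholders; in the real cluster they would be the hostnames returned by the serve_hostname container.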
“Going astray” in the Learning Process
While learning this part, I held a misconception for a long time: I assumed a Service had to be tied to a Deployment (the orchestration controller object for Pods) in order to work, so I always pictured the relationship as Service -> Deployment -> Pods. That understanding is wrong.
In Kubernetes, each component does its own job and only its own job. The Service binds to Pods purely through the app=hostnames selector, and that label happens to be defined in the Deployment's Pod template. So there is no direct relationship between Service and Deployment; neither knows about the other. The relationship is better described as follows:
In addition, I had earlier assumed that the Service's load balancing was provided by the Deployment. In fact it is handled on the Service side (by the cluster's proxying layer), and users can customize the proxy mode or load-balancing behavior. Kubernetes gives users plenty of freedom here.
Ingress
With the Service in place, our services can reach each other freely inside the cluster. But to make a service accessible to end users, we need one last component: Ingress.
Ingress is the reverse proxy abstraction in Kubernetes. It resolves a configured domain name and routes requests to an internal Service. It can be defined in YAML as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  rules:
  - host: hostname.sample.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hostnames
          servicePort: 80
In this manifest, we assign the domain name hostname.sample.com to the Service we just defined. With this, our service has a domain name that external clients can use to reach it. The Ingress is created with the same command pattern as before:
kubectl apply -f sample-ingress.yaml
With this configuration, our service is in principle accessible from the outside. In practice, however, we have no environment to test it locally: the local Kubernetes environment was created with kind, whose "nodes" are Docker containers rather than real machines. Everything above runs inside those containers, using Docker to simulate a Kubernetes cluster, so this is the one feature in this article that we cannot verify end to end.
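Since we cannot exercise the Ingress against a real cluster here, a toy sketch can at least illustrate its routing logic: match the request's Host header against a rule's host, then match the path by prefix, and forward to the rule's backend Service. (The rule structure below loosely mirrors the YAML above; it is an illustration, not the real Ingress controller.)

```python
# Toy reverse-proxy routing: pick a backend Service for a request
# by matching host exactly and path by prefix, like an Ingress rule.

def route(rules, host, path):
    for rule in rules:
        if rule["host"] == host and path.startswith(rule["path"]):
            return rule["backend"]
    return None  # no rule matched; a real controller returns 404

rules = [
    {"host": "hostname.sample.com", "path": "/",
     "backend": ("hostnames", 80)},
]

assert route(rules, "hostname.sample.com", "/anything") == ("hostnames", 80)
assert route(rules, "other.example.com", "/") is None
```

This is why the Ingress needs both a host and a path: the same cluster can expose many Services under different domain names and URL prefixes through one entry point.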
Fully deploying the application
Having covered the orchestration controllers for Pods in earlier sections and, in this one, service discovery for internal and external calls, we can now return to the question of how to deploy our application end to end.
Walking through Kubernetes' basic workflow, we can see a service's full path: Pods are orchestrated by a Deployment, discovered through a Service, and reverse-proxied to the outside through an Ingress. With these modules working together, our application is finally ready to run on this Kubernetes cluster.
I hope this picture gives you a more intuitive feel for the whole flow.
Conclusion
With this chapter, we have fully walked through deploying the public cloud version of our application on Kubernetes. The next section brings the final article in this series: an overview of Kubernetes, surveying the cluster's components and summarizing some of its deeper features.
If you are interested, don't miss it. See you in the next article!