Kubernetes Ingress exposes services in a cluster to external HTTP/HTTPS access and routes requests to services based on path-matching rules. Ingress, however, does not support plain TCP/UDP services. If our service uses WebSockets or raw sockets and needs to be reachable from outside the cluster, how do we configure that in Kubernetes?
There are basically two ways to do this [see Reference 1]:
- Use NodePort, and access the exposed port through the node's IP address
- Use ClusterIP + Ingress + ConfigMap
NodePort exposes the port directly, which requires the node to have an externally reachable IP address. In addition, this approach may bypass existing TLS termination, which can cause security problems.
A ClusterIP service can only be accessed from within the cluster and is exposed externally by proxying through the Ingress. However, for TCP/UDP, the Ingress cannot proxy directly; a ConfigMap mapping is required.
NodePort is relatively simple, so this document describes the ClusterIP + Ingress + ConfigMap method.
Example: Create a ClusterIP service
Suppose there is a WebSocket/socket service that listens on port 8828. Define a ClusterIP Service for it as follows (no type is specified, so the default, ClusterIP, is used).
apiVersion: v1
kind: Service
metadata:
  name: my-websocket-svc
  namespace: develop
spec:
  ports:
  - name: socket
    port: 8828
    targetPort: 8828
    protocol: TCP
  selector:
    app: my-websocket
Apply it to create the Service:
[root@kmaster k8s-deploy]# kubectl apply -f my-websocket-svc.yaml
Create ConfigMap
Create a ConfigMap in the namespace where the ingress-nginx-controller resides (if a suitable ConfigMap already exists, add the following entry to its data section):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  8828: "develop/my-websocket-svc:8828"
Each entry in data has the format <external port>: "<namespace>/<service name>:<service port>:[PROXY]:[PROXY]", where the [PROXY]:[PROXY] part is optional. The entry above maps port 8828 of the host to port 8828 of the my-websocket-svc service in the develop namespace.
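For illustration only, a hypothetical entry that also uses the optional PROXY fields, enabling the PROXY protocol both for incoming connections and for the connection to the backend, might look like this (same service and ports as above):

```yaml
data:
  8828: "develop/my-websocket-svc:8828:PROXY:PROXY"
```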
Apply it to create the ConfigMap:
[root@kmaster k8s-deploy]# kubectl apply -f tcp-service-configmap.yaml
Configure ingress-nginx-controller
Modify the ingress-nginx-controller configuration:
[root@kmaster ~]# kubectl edit deploy ingress-nginx-controller -n ingress-nginx
In the spec.template.spec.containers[].args[] section, add --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services (or, for UDP, --udp-services-configmap=$(POD_NAMESPACE)/udp-services), as in the sketch below.
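A minimal sketch of what the args section might look like after the change; the container name and the existing flags are assumptions and will vary with the ingress-nginx version and deployment:

```yaml
spec:
  template:
    spec:
      containers:
      - name: controller                                          # name assumed; check your Deployment
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller   # existing flags vary per install
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services  # added: TCP service mappings
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services  # optional: UDP service mappings
```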
In the spec.template.spec.containers[].ports[] section, add the port mapping, as in the sketch below.
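A sketch of the corresponding ports[] entry, assuming the same external port 8828 (the entry name is made up for illustration):

```yaml
        ports:
        - name: websocket-8828    # hypothetical name
          containerPort: 8828
          protocol: TCP
```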
In practice, the configuration also turns out to work without this ports[] entry.
Save the changes to apply the configuration update; the ingress-nginx-controller Pod will restart automatically for the configuration to take effect.
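One way to watch the restart complete, assuming the Deployment and namespace names used throughout this article:

```bash
kubectl rollout status deploy/ingress-nginx-controller -n ingress-nginx
```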
Validation
Run a command on the node where the ingress-nginx-controller Pod is located to check whether the TCP port is being listened on.
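For example, assuming netstat is installed on the node (ss -lntp | grep 8828 works as an alternative):

```bash
# list listening TCP sockets and filter for port 8828
netstat -lntp | grep 8828
```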
If the output shows nginx listening on port 8828, the port is being handled by ingress-nginx.
For WebSocket applications, wscat can be used for debugging:
C:\Users\Administrator>wscat -c ws://<domain>:8828
Connected (press CTRL+C to quit)
>
wscat can be installed with: npm install -g wscat
Other notes
- The ConfigMap should be in the same namespace as the nginx-ingress-controller. Otherwise, change $(POD_NAMESPACE) in --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services to the ConfigMap's actual namespace.
- If the nginx-ingress-controller is bound to a specific node, the restart may fail because of port-allocation conflicts. In that case, delete the old Deployment (kubectl delete deploy ingress-nginx-controller -n ingress-nginx) and re-create it (kubectl apply -f nginx-ingress.yaml). This can affect service availability, so be careful in a production environment.
- If the configuration does not take effect, check the nginx-ingress-controller Pod logs to locate the fault, for example: kubectl logs ingress-nginx-controller-58fdbbc68d-wqtlr -n ingress-nginx
Reference Documents:
- www.ibm.com/support/kno…
- Github.com/kubernetes/…
Follow the author's WeChat official account, Halfway Rain Song, for more technical articles.