Photo by ÁLVARO MENDOZA on Unsplash
This article is from the official wiki of the live streaming cluster SRS (https://github.com/ossrs/srs/wiki/v4_CN_K8s), published with the authorization of Yang Chengli, the author of SRS.
By Yang Jiancheng
This chapter describes how to build an Edge Cluster on K8s for high-concurrency streaming playback.
An Edge Cluster supports merged origin fetch: for a given stream, no matter how many clients are playing it, each EdgeServer pulls only one stream from the OriginServer. The Edge Cluster can therefore be scaled out to increase playback capacity, which is an essential CDN capability: high concurrency.
Note: An Edge Cluster can be an RTMP Edge Cluster or an HTTP-FLV Edge Cluster, depending on the protocol used by the client. For details, please refer to the related wiki.
For a self-built origin, the playback volume is usually not that large, so why not serve it directly from a single SRS origin instead of an Edge Cluster? The main scenarios are analyzed below:
- Prevent Origin overload, even when the number of published streams is small and the playback volume is modest. For example, when CDNs are placed in front of a self-built origin, a single stream may have origin-fetch connections from several CDNs. Using Edge protects Origin from problems caused by origin fetch; at worst, some Edge servers are overwhelmed by the fetch load.
- Multiple Edge Clusters can be deployed (only extra srs-edge-service instances need to be added), each exposed through its own SLB, and the traffic of each SLB can be limited so the CDNs do not interfere with each other. This keeps some CDNs available rather than all of them becoming unavailable at once.
- Separate the critical services on Origin by handing downstream streaming distribution to the Edge Cluster, so that Origin can focus on critical services such as slicing, DVR, and authentication without the services interfering with each other.
In this scenario, the K8s approach can be compared with the traditional way of deploying:
Step1: Create a stateless application (k8s Deployment) that runs the SRS Origin Server and Nginx, with HLS written to a shared Volume:
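A minimal sketch of such a Deployment, assuming the SRS origin container and an Nginx container share an emptyDir volume for the HLS slices (the image tags and mount paths below are illustrative; the app: srs-origin label is what the Services in the next steps select):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: srs-origin-deployment
  labels:
    app: srs-origin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: srs-origin
  template:
    metadata:
      labels:
        app: srs-origin
    spec:
      volumes:
      # Shared volume: SRS writes HLS slices here and Nginx serves them.
      - name: cache-volume
        emptyDir: {}
      containers:
      - name: srs
        image: ossrs/srs:3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1935
        - containerPort: 1985
        - containerPort: 8080
        volumeMounts:
        - name: cache-volume
          mountPath: /usr/local/srs/objs/nginx/html
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: cache-volume
          mountPath: /usr/share/nginx/html
EOF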
Step2: Create a service (k8s Service) that exposes the Origin service over ClusterIP for EdgeServer to connect to:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: srs-internal-origin-service
spec:
  type: ClusterIP
  selector:
    app: srs-origin
  ports:
  - name: srs-internal-origin-service-1935-1935
    port: 1935
    protocol: TCP
    targetPort: 1935
EOF
Note: OriginServer provides the streaming source service inside the cluster. Its internal domain name is srs-internal-origin-service, and EdgeServer connects to the OriginServer through this domain name.
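To confirm that the internal domain name resolves inside the cluster, you can run a one-off DNS lookup from a temporary pod; this is a generic Kubernetes check, and the pod name dns-test is arbitrary:

# Resolve the internal Service name from a throwaway busybox pod.
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup srs-internal-origin-service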
Step3: Create a service (k8s Service) that provides HTTP over SLB, so Nginx can serve HLS externally:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: srs-http-service
spec:
  type: LoadBalancer
  selector:
    app: srs-origin
  ports:
  - name: srs-http-service-80-80
    port: 80
    protocol: TCP
    targetPort: 80
EOF
Note: Nginx reads the slices generated by SRS Origin from the shared Volume and serves HLS externally.
Note: Here we let ACK create the SLB and EIP automatically; you can also specify an SLB manually, see the section on purchasing and specifying SLB and EIP.
Step4: Create a k8s ConfigMap to store the configuration file used by the SRS Edge Server:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: srs-edge-config
data:
  srs.conf: |-
    listen              1935;
    max_connections     1000;
    daemon              off;
    http_api {
        enabled         on;
        listen          1985;
    }
    http_server {
        enabled         on;
        listen          8080;
    }
    vhost __defaultVhost__ {
        cluster {
            mode        remote;
            origin      srs-internal-origin-service;
        }
        http_remux {
            enabled     on;
        }
    }
EOF
Note: The Edge Server configuration uses the internal domain name srs-internal-origin-service, registered by the Service above, to connect to the OriginServer.
Step5: Create a stateless application (k8s Deployment) that runs multiple SRS Edge Servers:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: srs-edge-deployment
  labels:
    app: srs-edge
spec:
  replicas: 3
  selector:
    matchLabels:
      app: srs-edge
  template:
    metadata:
      labels:
        app: srs-edge
    spec:
      volumes:
      - name: config-volume
        configMap:
          name: srs-edge-config
      containers:
      - name: srs
        image: ossrs/srs:3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1935
        - containerPort: 1985
        - containerPort: 8080
        volumeMounts:
        - name: config-volume
          mountPath: /usr/local/srs/conf
EOF
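The Deployment above starts three Edge replicas. Because each EdgeServer pulls only one copy of a stream from Origin, playback capacity can be grown simply by changing the replica count; for example (the target replica count is just an illustration):

# Scale the Edge Cluster out to handle more concurrent players.
kubectl scale deployment/srs-edge-deployment --replicas=10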
Step6: Create a service (k8s Service) so that the Edge Cluster provides streaming services externally over SLB:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: srs-edge-service
spec:
  type: LoadBalancer
  selector:
    app: srs-edge
  ports:
  - name: srs-edge-service-1935-1935
    port: 1935
    protocol: TCP
    targetPort: 1935
  - name: srs-edge-service-1985-1985
    port: 1985
    protocol: TCP
    targetPort: 1985
  - name: srs-edge-service-8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
EOF
Note: Here we let ACK create the SLB and EIP automatically; you can also specify an SLB manually, see the section on purchasing and specifying SLB and EIP.
Step7: Done. You can now publish and play streams; HLS is played from Nginx (port 80), while RTMP and HTTP-FLV are played from SRS (an FFmpeg/ffplay example follows the list):
- Publish RTMP to: rtmp://28.170.32.118/live/livestream
- Play RTMP from: rtmp://28.170.32.118/live/livestream
- Play HTTP-FLV from: http://28.170.32.118:8080/live/livestream.flv
- Play HLS from: http://28.170.32.118/live/livestream.m3u8
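As a concrete example, a stream can be published with FFmpeg and played back with ffplay; the addresses are the ones listed above, and the input file demo.flv is just a placeholder:

# Publish a local file to the Edge Cluster over RTMP (input file is a placeholder).
ffmpeg -re -i demo.flv -c copy -f flv rtmp://28.170.32.118/live/livestream

# Play the stream back over RTMP, HTTP-FLV, or HLS.
ffplay rtmp://28.170.32.118/live/livestream
ffplay http://28.170.32.118:8080/live/livestream.flv
ffplay http://28.170.32.118/live/livestream.m3u8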
The public addresses come from the Services: use kubectl get svc/srs-http-service for the HLS address and kubectl get svc/srs-edge-service for the RTMP/HTTP-FLV address.
Note: If the SLB and EIP are created automatically, the IP for HLS and the IP for RTMP/HTTP-FLV will be different. You can also specify the SLB manually so that both Services share the same SLB; see the section on purchasing and specifying SLB and EIP.
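A sketch of how sharing one SLB could look on ACK: the Service is annotated with the id of an existing SLB. The annotation names below are an assumption about the ACK cloud controller and should be verified against the ACK documentation, and lb-xxxxxxxx is a placeholder for a real SLB id:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: srs-http-service
  annotations:
    # Assumed ACK annotations for binding to an existing SLB; verify the names against the ACK docs.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxx"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
spec:
  type: LoadBalancer
  selector:
    app: srs-origin
  ports:
  - name: srs-http-service-80-80
    port: 80
    protocol: TCP
    targetPort: 80
EOF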